The group presented Xerox and NASA as case studies of Information and Knowledge Management concepts and processes. The two cases were chosen for their distinctive experiences. The aim of this material is not to compare and contrast the two cases, but to discuss how the key issues identified by the group relate to the growing literature on Information and Knowledge Management. Along the way, efforts to differentiate Information Management from Knowledge Management provide a better grasp of these two concepts, their practices, and their roles in organizations.
The Xerox case focused on the organization’s efforts to make its knowledge accessible by transforming tacit knowledge into explicit knowledge. Before the establishment of its knowledge base system, Eureka II, ethnographic work on Xerox employees prompted the company to take an interest in managing its organizational learning. The key issues from this study were identified. Andrew Cox’s investigation of Xerox’s Eureka system and its inception suggests that knowledge management is a repackaged version of information management.
The use of Orr’s ethnographic work suggests the possibility that the organization employed this innovation as its own “rebranding tool” (Suchman in Cox, 2007, p. 7). Furthermore, the group considered the concept of communities of practice (CoPs) as a more promising alternative to the technical approach taken in building Eureka as a knowledge base system. The Xerox case study offered a glimpse of how an organization behaves and transforms its practices to create innovations and, in turn, maintain its stability and competitive edge in the market.
The case of NASA’s Challenger incident is an important example for investigating how an organization manages its security and safety practices. The nature of NASA as an organization can be understood by examining its vision, mission, and goals; however, the group focused on a particular incident and its chronology. Risk analysis and risk management are among the concepts that bear on this organization’s experience of information and knowledge management.
Some of the key concerns for this case are: (1) Should NASA have a Eureka-type system? (2) Would communities of practice have helped NASA? (3) How would NASA have benefited from going through the alignment process? This material attempts to clarify these inquiries and to provide direction by citing studies and concepts in the field of knowledge management.

Discussion

NASA’s case, particularly the Challenger incident, can be examined using the concepts of risk assessment and risk management.
Risk management is defined in NASA’s literature as “a management process by which the safety risks can be brought to levels or values that are acceptable to the final approval authority” (ASEB, p. 79). Processes such as the establishment of acceptable risk levels; the formalization of changes in system design or operational method to achieve such risk levels; system validation and certification; and system quality assurance were enumerated as part of risk management (ASEB, p. 79). After the Challenger incident, recommendations such as hierarchical tasks were cited:

The Committee believes that risk management must be the responsibility of line management (i.e., program manager and, ultimately, the Administrator of NASA). Only this program management, not the safety organizations, can make judicious use of means available to achieve the operational goals while reducing the safety risks to acceptable levels. Safety organizations cannot, however, assure safe operation; they can only assure that the safety risks have been properly evaluated, and that the system configuration and operation is being controlled to those risk levels which have been accepted by top management. (4.1, 4.3) (ASEB, p. 79)

The passage above records one of the lessons learned, the identification of the elements of and responsibilities for risk assessment and risk management, as cited in the 1988 assessment document.
The establishment of responsibility for program direction and integration; the need for quantitative measures of relative risk; the need for integrated review and overview in the assessment of risk and in independent evaluation of retention rationales; the independence of the certification of flight hardware and of software validation and verification; and safety margins for flight structures were all cited as lessons learned following the Challenger accident. Clearly, risk management is an aspect of information management in this particular case.
Aside from the initiatives for change in risk assessment and management, examining NASA as a learning organization helps illustrate the information processes within the organization. Organizational learning at NASA can be traced back to the Apollo era, when a centralized shuttle management structure was adopted (Mahler & Casamayou, 2009, p. 164). But such lessons can be unlearned, as the Challenger accident showed. Before the Columbia accident that followed in 2003, unlearning had occurred in critical decision areas.
Mahler and Casamayou (2009) relate this as follows:

Similarly, there was initial evidence that NASA had learned to resist schedule pressures. The agency delayed launches to deal with ongoing technical problems and made the decision to rely on the shuttle only when absolutely needed. But these lessons from the Challenger faded in the 1990s under severe budget constraints and new schedule pressures created by our participation in the International Space Station. (2009, p. 164)
This shows how outside forces can affect organizational learning. Public organizational learning, not unlike corporate organizational learning, is shaped by its environment. In public organizations such as NASA, the context is risk assessment and management in the service of reliability and public standing, rather than the market competitiveness and capital gains that drive corporations. There are particulars of public organizational learning that should be brought to light to better understand the information processes and learning behaviors within NASA.
Mahler and Casamayou (2009) enumerate a three-part process of organizational learning: first, problem recognition; second, analysis of results to produce inferences about cause and effect, in the hope of understanding how to achieve better outcomes; and third, the institutionalization of new knowledge from which the organization will benefit (Mahler & Casamayou, 2009, p. 166). These steps summarize public organizational learning at a macro level.
It is also important to examine the interactions of actors within the organization. During the group presentation, the question of whether NASA would benefit from a Eureka-type system was raised. The concept of communities of practice (CoPs) within NASA, as a source of what Andrew Cox (2004) calls non-canonical knowledge, was explored. Before the establishment of any knowledge base system, an eventful experience is treated as a learning source. In NASA’s case, following the Apollo era, detection systems were installed and had been reliable ever since their inception.
These quantitative measures of assessing risks and failures run parallel with corporate organizations’ over-reliance on technology, the “IT trap” of first-generation knowledge management practice (Huysman & Wulf, 2006). What should also be noted is the transformation of NASA into a complex system of actors, decision makers, and diagnostic and technological tools. As a system becomes more complex, there is a higher possibility of unexpected and undesirable outcomes. Charles Perrow’s (1999) normal accident theory is closely related to this inference.
The nature of function and decision-making within NASA exemplifies Perrow’s concepts of coupling and interactions. Coupling can be tight or loose; in NASA’s case it is clearly tight, and tightly coupled interactions within an organization cannot tolerate delay. Interactions can be linear or complex (Perrow, 1999); NASA’s are undoubtedly complex. As mentioned earlier, the possible over-reliance on diagnostic systems, the isolation of decision makers, and the pressure to launch all contributed to the incident.
Judging and perceiving also play a role in learning. Decisions in NASA’s case are both measure-based and judgement-based, the latter relying more on intuition and non-verbal experience. In this case, however, NASA’s critical decision actors were not thoroughly immersed in safety-measure practice, which shows that organizational structure played a part in the incident. As Baumard (1999) related in his work on tacit knowledge in organizations, “puzzled organizations” become manifest when accidents take place:
The notion of ‘acceptability’ was, in effect, a social construction developed in the context of an organization in which the perception of risk thresholds had been modified by the routinization of the mastery of a complex technology. If the O-ring problem had been brought to the attention of an untrained public it would quite probably have provoked an animated reaction. In a different social context it would have been found entirely ‘unacceptable’ to launch space shuttles with joints that risked giving way, whatever the level of this risk.
Despite the accuracy, the precision to categorize the risk associated with the joints as ‘acceptable’ seems to be based more on the common meaning of the word ‘acceptable’ than on any scientific definition. There is no equivalent to the ‘acceptable’ in other areas of exact science; it is a value judgement, not a measure. This suggests that, if ‘reality is hidden by measures’ (Berry, 1983), measures too may sometimes be hidden by reality. The road to disaster in the Challenger shuttle case was clearly a social construction.