The study of design research methodology


Studies on design research methodology are infrequent, although there is a consensus that more effort is needed to improve design research quality. Previous calls for exercising better research methodology have been unsuccessful. As numerous studies reveal, there is no single scientific methodology that is exercised in science or in any other research practice. Rather, research methodologies are socially constructed. Since some constructions are better than others for different purposes, it becomes valuable to study different methodologies and their influence on research practice and results. Proposals for such studies are offered.

1 The state of design research methodology
In many disciplines, research methodology is seldom discussed by researchers. Such neglect may result from several attitudes towards research methodology, including indifference or ignorance. Researchers may be indifferent because their research is well received by the community and therefore they need not change or worry about it; or researchers may perceive their practice as science and wish to adopt as their methodology what they perceive to be the methodology used by scientists, henceforth referred to as the received scientific methodology. Roughly, the received scientific methodology consists of several steps: (1) observations or preliminary studies, (2) hypothesis formation, (3) hypothesis testing, (4) hypothesis evaluation, and (5) hypothesis acceptance or rejection. It is asserted that results of research discovered by this methodology lead to applied research and, subsequently, to practical impact. In contrast to this assertion, it is proclaimed that the goal of this methodology is to advance knowledge for its own sake, not to address practical needs nor to be responsible for delivering practical results. Most researchers would rarely question this methodology, but since it is impossible to follow or even hard to approximate, researchers who claim to have adopted it do not practice it. Indifference may be caused by ignorance; often researchers are not familiar with the details of, and the controversies about, the received scientific methodology. They are unaware of the alternatives to this methodology that we briefly mention later, their practice, and their consequences. In fact, most researchers interpret methodology as a fancy synonym for method, while methodology is (or attempts to approximate) a compatible collection of assumptions and goals underlying methods, the methods themselves, and the way the results of carrying out the methods are interpreted and evaluated.
The ability to validate the attainment of research assumptions and goals through the evaluations is a critical factor in making the above collection compatible. The difference in meanings assigned to the term methodology can be illustrated through an example from structural optimization. One research method of structural design involves the development of optimization procedures and their testing on benchmark problems. Most researchers will call this method "research methodology." However, the assumptions underlying such work (e.g., that optimization is a good model of structural design) and its testing (e.g., that simple benchmark problems are representative of the complex structural designs performed by designers), or the belief that such research advances practice (e.g., that designers use optimization programs developed in research and that designers' practice benefits from them), are rarely articulated and thus rarely validated.

If these issues were addressed, the conclusions would probably contradict those implicit assumptions. First, independent of any discipline, optimization is a very restricted view of design (even with respect to Simon's (1981) restricted view). Second, results obtained on simple benchmark problems do not necessarily transfer to real design problems, nor do they reflect performance on other benchmark problems (Haftka and Sobieski, 1992); simple benchmark comparisons provide little understanding of the relative merit of different optimization procedures (Burns, 1989). Third, practitioners are very reluctant to use optimization procedures (Adelman, 1992; Haftka and Sobieski, 1992). This reluctance contradicts the implicit or stated research goals of improving structural design practice.
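The second point can be made concrete with a small experiment. The sketch below assumes nothing about any particular optimization code: the benchmark functions and the two search procedures are hypothetical stand-ins chosen for illustration. It simply shows how one would compare procedures across problems; the relative performance observed on one benchmark need not carry over to another.

```python
import math
import random

# Two toy benchmark problems, both minimized at the origin.
def sphere(x):
    return sum(v * v for v in x)

def rastrigin(x):
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

# Two simple search procedures (illustrative stand-ins for real optimizers).
def random_search(f, dim, iters, rng):
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_val = f(best)
    for _ in range(iters):
        cand = [rng.uniform(-5, 5) for _ in range(dim)]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best_val

def hill_climb(f, dim, iters, rng, step=0.5):
    cur = [rng.uniform(-5, 5) for _ in range(dim)]
    cur_val = f(cur)
    for _ in range(iters):
        cand = [v + rng.gauss(0, step) for v in cur]
        val = f(cand)
        if val < cur_val:
            cur, cur_val = cand, val
    return cur_val

# Rank the two procedures on each problem; note the ranking on one
# problem says little about the ranking on the other.
for problem in (sphere, rastrigin):
    for method in (random_search, hill_climb):
        score = method(problem, 5, 2000, random.Random(0))
        print(problem.__name__, method.__name__, round(score, 3))
```

A validated claim about "the better optimization procedure" would require far more than this: many problems, statistical treatment of the runs, and some argument that the problems represent real design tasks.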

Indifference or ignorance towards research methodology relieves researchers from addressing such contradictions or exercising informed choices between methodologies in their research. Many researchers simply follow the method of their close senior peers without questioning or even knowing the assumptions that underlie it. In most cases, only the method (the actual research activity) is transferred to research apprentices. Thus, driven by social proximity, research assumptions become part of the implicit, unarticulated research culture.

Occasionally, this state of affairs has attracted the attention of researchers. In 1987, two representative papers critical of the state of design research practice were published, one by Antonsson (1987) and the other by Dixon (1987). Both papers advocated adopting the scientific methodology in design research, either for improving research quality or for improving design practice. These and other related papers elicited almost no response from the research community. Since their publication, the state of design research methodology has remained virtually unchanged. Such a reaction raises at least two questions: what may have caused this response, and, if this is an expected reaction, is the state of research methodology worth additional discussion? Two plausible answers that originate from two different interpretations of Dixon's and Antonsson's papers justify further discussion.

First, Dixon's and Antonsson's positions may have been interpreted as criticizing the intellectual deficiency of research and demanding that researchers exercise a methodology different from the one they actually use, one that requires additional effort. In particular, the proposed methodology demanded that researchers seriously test their hypotheses. It might have been expected that such requests would be opposed or, worse, ignored. Second, researchers who are familiar with current views in the philosophy of science may have treated Dixon's or Antonsson's positions as too simplistic if they interpreted these positions as advocating the received scientific view. Since the stated goal of science is creating knowledge for the sake of knowing, but not necessarily knowledge that is relevant to practice, the received scientific methodology may hinder improving practice by detaching the products of research (i.e., design theories) from actual practice (Argyris, 1980; Reich, 1992). According to this interpretation and its limitation, previous calls for improving research methodology could not have impacted design practice even if researchers had adopted them. If design practice is indeed a goal of design research, different methodologies may be needed to establish a connection between research and practice (Reich et al, 1992; Reich, 1994a; Reich, 1994b). These methodologies can evolve in various ways, including studying researchers' activities and the way these activities correlate with research progress, thereby identifying the relationships between different assumptions, methods, and consequences.

I have no intention of selecting between these two interpretations or of developing others, but rather of explaining how to improve research practice without assuming a fixed methodology. To start with, we must acknowledge that there are differing views about scientific methodology (Kourany, 1987). In addition, we must acknowledge studies on science and technology demonstrating that scientific progress is influenced by social, cultural, and political factors. Researchers in various sciences are increasingly acknowledging that knowledge is socially constructed (Pickering, 1992), and knowledge of design in particular (Konda et al, 1992; Monarch et al, 1993). Moreover, the social influence on research practice includes aspects such as: shaping research goals according to available grants or unarticulated interests; publishing papers to receive tenure or to justify traveling to conferences; and fraud (Bell, 1992; Broadbent, 1981).

The first studies on the social dimensions of science analyzed the progress of the "hard" sciences such as chemistry or physics (Feyerabend, 1975; Kuhn, 1962). More recently, historical or reflective studies in science and engineering have begun addressing the social aspects underlying research and the need for different methodologies if practical impact is sought. These disciplines include: management science (Argyris, 1980), education (Guba, 1990), public policy (Palumbo and Calista, 1990), information systems (Bjerknes et al, 1987), cell biology (Grinnell, 1982), design in general (Broadbent, 1981), structural design (Addis, 1990; Timoshenko, 1953), solid mechanics (Bucciarelli and Dworsky, 1980), and even mathematics (DeMillo et al, 1979). Moreover, the social aspects manifested themselves in unexpected circumstances and in resolving seemingly trivial issues such as the implementation of computer arithmetic (MacKenzie, 1993), the most basic infrastructure for much engineering design research and practice.

The importance of the aforementioned studies is twofold. First, they reject the received scientific view as the means for formulating theories and as a means for improving practice. Second, they acknowledge and demonstrate that research methodology is a subject of study and constant improvement, and that gaining insight into the procedures of doing research can improve research itself. Since science is a social enterprise, the study of research methodology is mandatory for providing guidance in the maze of methodologies and in monitoring the quality of research. In order to sustain credibility, researchers must use the techniques they develop in design research and demonstrate that they have some relevance to practice. Moreover, since funding agencies expect researchers to work towards improving design practice (National Research Council, 1991), researchers need to understand what kinds of studies are useful in practice, how such studies are conducted within budget limits, and which factors account for the diffusion of studies' results into practical engineering.

2 Studying research methodology

Researchers may find it fruitful to study: the objectives or goals of engineering design research; how these objectives can be fulfilled through research; how progress towards research goals can be tested; and how this overall process can be improved. Such study will evolve a repository of methods with their assumptions, interpretations, successes, and failures. This is the essence of studying engineering design research methodology.

This view neither advocates nor leads to anarchy. Furthermore, the evolving nature of methodology does not negate the usefulness of some principles for evaluating scientific theories (e.g., those acknowledged even by Kuhn, 1987), nor does it mean that methodology is merely an art (Beveridge, 1957) that is not amenable to systematic study. It only acknowledges that the assumptions underlying methodologies, and their potential effectiveness and drawbacks for conducting certain types of research projects, must be studied. We now illustrate the study of research methodology by elaborating some issues related to Antonsson's six-step methodology (1987, p. 154). Each of the steps raises issues that need further study. These issues are not startling; some are familiar while others are not. Unfortunately, most of them are neglected all too often.

(1),(2) Propose/hypothesize that a set of rules for design can elucidate part of the design process, and develop those rules.

Several questions arise about the actual execution of this activity. What is a good source of such rules? Are (un)successful designs (Petroski, 1989; Suh, 1990), previously issued patents (Arciszewski, 1988), or design textbooks (Aguirre and Wallace, 1990) good sources? Is studying human designers useful (Subrahmanian, 1992)? The answer is obviously affirmative; nevertheless, these sources are rarely consulted. If studying human designers is useful, how do different ways of studying affect the usefulness of the rules hypothesized? Inarguably, such studies bring research methods from psychology and sociology into play in design research. For example, how are designers' activities coded in observational studies? Is the coding scheme tested for reliability by using at least two coders? Are the results statistically valid? Which criteria may be used for selecting candidate hypotheses for further testing? Can the subjective bias in this selection be reduced?
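The reliability question can be made concrete. One standard chance-corrected agreement statistic for two coders is Cohen's kappa; the sketch below computes it for a pair of invented coding transcripts (the category labels and data are hypothetical, chosen purely for illustration).

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of segments on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten protocol segments by two observers.
a = ["sketch", "analyze", "sketch", "evaluate", "sketch",
     "analyze", "evaluate", "sketch", "analyze", "sketch"]
b = ["sketch", "analyze", "analyze", "evaluate", "sketch",
     "analyze", "evaluate", "sketch", "sketch", "sketch"]
print(round(cohens_kappa(a, b), 3))  # -> 0.677
```

Raw percent agreement here is 0.8, but kappa corrects for the agreement the two coders would reach by labeling at random with their own frequencies, which is why it is the more defensible reliability figure to report.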

Note that the above questions raise a related question. Consider trading off the quality of the design rules proposed against the resources spent to find them. What kind of information is needed for making a sensible trade-off, and how can this information be collected and organized?

(3) Have novice designers learn the rules and apply them.

How does the above learning process take place? Are the designers taught, thus introducing teachers' bias? Or do they learn the rules on their own, potentially by solving other design problems, thereby excluding the exercise of some measure of control? How are problems selected such that novice designers can solve them, yet such that they are relevant to real practice? For that matter, how relevant is any laboratory experiment to real design? This critical question leads researchers in other disciplines, as well as in design, to use different methods such as ethnography and participatory research while studying designers. See (Reich et al, 1992; Subrahmanian, 1992; Reich, 1994a) for additional details.

Are benchmark problems used by different researchers to allow for the replication of results? Is performance on benchmark problems indicative of performance on other problems or on real design? Is it possible to replicate results relevant to real design? Can rules for multidisciplinary design be hypothesized and tested in the same manner? If the common view of science is adopted, this study must be controlled to be valid. One minimal requirement is that another group of designers participate in the study, potentially novice designers who did not study the new design rules. Note, however, that since the first group of novice designers is trained with the new rules, the second group must receive similar training with default or irrelevant rules. Furthermore, members of the groups must not know which group was trained with the new rules. A better study may also include two groups of expert designers, one that learns the rules and another that learns the default rules. The latter may provide a better indication of the relative merit of the new design rules with respect to existing design practice. In contrast, if the study follows a different methodology such as participatory research (Reich et al, 1992; Whyte, 1991), the nature of the study would change significantly into long-term case studies where real design problems are addressed. Exercising common scientific methods in this methodology may damage research (Blumberg and Pringle, 1983).

(4) Measure the design productivity of the rules.

How is productivity measured? Which criteria are included in the measurement: quality of design, time to design, or revenue to the manufacturer? Do the measures used adhere to the principles of measurement theory (Roberts, 1979; Reich, 1995), or are they ad hoc and meaningless?
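The measurement-theory concern is not academic. For instance, ordinal quality ratings admit any order-preserving numeric coding, so a statistic that depends on the particular coding (such as the mean) can reverse a comparison between two designs, while an order-based statistic (such as the median) cannot. The sketch below uses invented ratings purely for illustration:

```python
from statistics import mean, median

# Any order-preserving numeric coding of an ordinal scale is admissible;
# these two codings are therefore equally "correct".
coding_1 = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}
coding_2 = {"poor": 1, "fair": 2, "good": 3, "excellent": 10}

design_a = ["good", "good", "good"]          # hypothetical quality ratings
design_b = ["fair", "fair", "excellent"]

for coding in (coding_1, coding_2):
    a = [coding[r] for r in design_a]
    b = [coding[r] for r in design_b]
    # The mean-based comparison flips between codings; the median does not.
    print(mean(a) > mean(b), median(a) > median(b))
```

Under the first coding design A has the higher mean; under the second, design B does, although nothing about the ratings changed. A conclusion that survives only under one arbitrary coding is, in Roberts's (1979) sense, meaningless.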

Do designers independent of those who created the designs, or potential customers, participate in this measurement? Can the quality of a design be assessed without manufacturing it and subjecting it to actual use? How relevant will abstract measurements be to practical design? Is the measurement quantitative, or is qualitative information being gathered as well?

(5) Evaluate the results to confirm or refute the hypothesis.

How is the measured data evaluated? What are the criteria that determine whether a hypothesis was confirmed or refuted? Are these criteria general or context dependent? Note that most philosophers of science, including Popper and Kuhn, reject the existence of such criteria (Weimer, 1979).
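Even when no general confirmation criteria exist, a study can at least state its statistical criterion explicitly. One simple, assumption-light choice for comparing a treatment group with a control group is a permutation test; the sketch below applies it to invented design times (the data and the 0.05 threshold are illustrative assumptions, not a recommendation from the paper).

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, trials=10000, seed=1):
    """Two-sided permutation test for a difference in group means.
    Returns the fraction of random relabelings whose mean difference
    is at least as extreme as the observed one."""
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        if abs(mean(perm_a) - mean(perm_b)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical design times (hours) for rule-trained vs. control novices.
with_rules = [4.1, 3.8, 4.5, 3.6, 4.0, 3.9]
without_rules = [5.2, 4.8, 5.5, 4.9, 5.1, 4.7]
p = permutation_p_value(with_rules, without_rules)
print(p < 0.05)  # whether the difference clears the chosen threshold
```

Making the criterion explicit does not settle whether it is the right one; it merely moves the choice from an implicit assumption to something other researchers can examine and contest.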

Are the criteria correlated with real design? That is, might researchers not find designers successfully employing design rules that were refuted by researchers? For example, Fritts et al. (1990, p. 478) describe engineers using theories that produce erroneous results with respect to experiments but that have the pragmatic utility of differentiating between candidate designs.

Are hypotheses really refuted or confirmed, or are different hypotheses found to be useful in different contexts?

When is it possible to disregard experimental evidence in favor of keeping a hypothesis (Agassi, 1975)? When can experiments be harmful to progress (Truesdell, 1982)? Does a failure of a hypothesis constitute a failure of a research project, or can it provide useful information worth reporting? Will archival journals publish such a report?

(6) Refine the hypothesis.

The comments on items (1) and (2) apply here. Moreover, how does one diagnose a faulty hypothesis so as to accommodate empirical testing? When is refinement insufficient to address the failure of a hypothesis, so that a new "worldview" must be adopted?

The above expansion of Antonsson's proposal reflects the complexity, richness, and necessity of studying research methodology. It illustrates that the design of a research activity is complex and difficult. It hints that some activities that lead to research successes may fail other research, and that some activities may not be compatible with some methodologies. Furthermore, research failures (or successes) can lead to practical successes (or failures). Therefore, it is critical to identify where methods fail or succeed, and in relation to which assumptions.


Science does not progress according to a distinctive methodology, nor could engineering design research; especially not if the goal is advancing design practice and not some abstract 'understanding.' Different research scenarios, consisting of different goals, disciplines, and cultural settings, may call for different research methodologies for attaining the stated goals. Research involves design, and therefore design researchers must continuously be reflective. This paper illustrated how researchers can reflect upon their research methodology. If researchers object to such reflection, they risk losing credibility and, more importantly, lose the chance of discovering whether their work is meaningful.


The ideas expressed in this paper benefited from discussions with Suresh Konda, Sean Levy, Shoulamit Milch-Reich, Ira Monarch, and Eswaran Subrahmanian. This work was done partly while the author was with the Department of Civil and Environmental Engineering, Duke University, Durham, NC, and the Engineering Design Research Center, Carnegie Mellon University, Pittsburgh, PA.

Addis, W. (1990). Structural Engineering: The Nature of Theory and Design, Ellis Horwood, New York, NY.
Adelman, H. M. (1992). "Experimental validation of the utility of structural optimization." Structural Optimization, 5(1-2):3-11.
Agassi, J. (1975). Science in Flux, D. Reidel Publishing Company, Dordrecht.

Aguirre, G. J. and Wallace, K. M. (1990). "Evaluation of technical systems at the design stage." In Proceedings of The 1990 International Conference on Engineering Design, ICED-90 (Dubrovnik).
Antonsson, E. K. (1987). "Development and testing of hypotheses in engineering design research." ASME Journal of Mechanisms, Transmissions, and Automation in Design, 109:153-154.
Arciszewski, T. (1988). "ARIZ 77: An innovative design method." Design Methods and Theories, 22(2):796-820.

Argyris, C. (1980). Inner Contradictions of Rigorous Research, Academic Press, New York, NY.
Bell, R. (1992). Impure Science: Fraud, Compromise, and Political Influence in Scientific Research, Wiley, New York, NY.

Beveridge, W. I. B. (1957). The Art of Scientific Investigation, Norton, New York, NY, Revised edition.
Bjerknes, G., Ehn, P., and Kyng, M., editors (1987). Computers and Democracy: A Scandinavian Challenge, Gower Press, Brookfield, VT.

Blumberg, M. and Pringle, C. D. (1983). "How control groups can cause loss of control in action research: The case of Rushton Coal Mine." Journal of Applied Behavioral Science, 19(4):409-425.
Broadbent, G. (1981). "The morality of designing." In Design: Science: Method, Proceedings of The 1980 Design Research Society Conference, pages 309-328, Westbury House, Guilford, England.
Bucciarelli, L. L. and Dworsky, N. (1980). Sophie Germain: An Essay in the History of Elasticity, D. Reidel, Dordrecht, Holland.

Burns, S. A. (1989). "Graphical representations of design optimization processes." Computer-Aided Design, 21(1):21-24.
DeMillo, R. A., Lipton, R. J., and Perlis, A. J. (1979). "Social processes and proofs of theorems and programs." Communications of the ACM, 22:271-280.
Dixon, J. R. (1987). "On research methodology towards a scientific theory of engineering design." Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 1(3):145-157.
Feyerabend, P. K. (1975). Against Method, New Left Books, London, UK.
Fritts, M., Comstock, E., Lin, W.-C., and Salvasen, N. (1990). "Hydro-numeric design: Performance prediction and impact on hull design." Transactions SNAME, 98:473-493.
Grinnell, F. (1982). The Scientific Attitude, Westview Press, Boulder, CO.
Guba, E. G., editor (1990). The Paradigm Dialog, Sage Publications, Newbury Park, CA.
Haftka, R. T. and Sobieski, J. (1992). "Editorial: The case for helping consumers of research." Structural Optimization, 4(2):63-64.

Konda, S., Monarch, I., Sargent, P., and Subrahmanian, E. (1992). "Shared memory in design: A unifying theme for research and practice." Research in Engineering Design, 4(1):23-42.
Kourany, J. A., editor (1987). Scientific Knowledge: Basic Issues in the Philosophy of Science, Wadsworth, Belmont, CA.

Kuhn, T. S. (1962). The Structure of Scientific Revolutions, The University of Chicago Press, Chicago, IL.
Kuhn, T. S. (1987). "Objectivity, value judgment, and theory choice." In Kourany, J. A., editor, Scientific Knowledge: Basic Issues in the Philosophy of Science, pages 197-207, Belmont, CA, Wadsworth.
MacKenzie, D. (1993). "Negotiating arithmetic, constructing proof: The sociology of mathematics and information technology." Social Studies of Science, 23(1):37-65.

Monarch, I. A., Konda, S. L., Levy, S. N., Reich, Y., Subrahmanian, E., and Ulrich, C. (1993). "Shared memory in design: Theory and practice." In Proceedings of the Invitational Workshop on Social Science Research, Technical Systems and Cooperative Work (Paris, France), pages 227-241, Paris, France, Department Sciences Humaines et Sociales, CNRS.

National Research Council (1991). Improving Engineering Design: Designing For Competitive Advantage, National Academy Press, Washington, DC.
Palumbo, D. J. and Calista, D. J., editors (1990). Implementation and The Policy Process: Opening Up The Black Box, Greenwood Press, New York, NY.
Petroski, H. (1989). "Failure as a unifying theme in design." Design Studies, 10(4):214-218.
Pickering, A., editor (1992). Science as Practice and Culture, The University of Chicago Press, Chicago, IL.
Reich, Y., Konda, S., Monarch, I., and Subrahmanian, E. (1992). "Participation and design: An extended view." In Muller, M. J., Kuhn, S., and Meskill, J. A., editors, PDC'92: Proceedings of the Participatory Design Conference (Cambridge, MA), pages 63-71, Palo Alto, CA, Computer Professionals for Social Responsibility.

Reich, Y. (1992). "Transcending the theory-practice problem of technology." Technical Report EDRC 12-51-92, Engineering Design Research Center, Carnegie Mellon University, Pittsburgh, PA.
Reich, Y. (1994a). "Layered models of research methodologies." Artificial Intelligence for Engineering Design, Analysis, and Manufacturing, 8(4):(in press).

Reich, Y. (1994b). "What is wrong with CAE and can it be fixed." In Preprints of Bridging the Generations: An International Workshop on the Future Directions of Computer-Aided Engineering, Pittsburgh, PA, Department of Civil Engineering, Carnegie Mellon University.
Reich, Y. (1995). "Measuring the value of knowledge." International Journal of Human-Computer Studies, (in press).

Roberts, F. S. (1979). Measurement Theory with Applications to Decisionmaking, Utility, and the Social Sciences, Encyclopedia of Mathematics and its Applications, Vol. 7, Addison Wesley, Reading, MA.
Simon, H. A. (1981). The Sciences of The Artificial, MIT Press, Cambridge, MA, 2nd edition.
Subrahmanian, E. (1992). "Notes on empirical studies of engineering tasks and environments, invited position paper." In NSF Workshop on Information Capture and Access in Engineering Design Environments (Ithaca, NY), pages 567-578.
Suh, N. P. (1990). The Principles of Design, Oxford University Press, New York, NY.
Timoshenko, S. P. (1953). History of Strength of Materials: With a Brief Account of the History of Theory of Elasticity and Theory of Structures, McGraw-Hill, New York, NY.
Truesdell, C. (1982). "The disastrous effects of experiment upon the early development of thermodynamics." In Agassi, J. and Cohen, R. S., editors, Scientific Philosophy Today: Essays in Honor of Mario Bunge, pages 415-423, Dordrecht, D. Reidel Publishing Company.

Weimer, W. B. (1979). Notes on the Methodology of Scientific Research, Lawrence Erlbaum, Hillsdale, NJ.
Whyte, W. F., editor (1991). Participatory Action Research, Sage Publications, Newbury Park, CA.

Transactions of the ASME, Journal of Mechanical Design, 1995, in press
