A Response to the Critics
If everyday design were ruled by aesthetics, life might be more pleasing to the eye but less comfortable; if ruled by usability, it might be more comfortable but uglier. If cost or ease of manufacture dominated, products might not be attractive, functional, or durable. Clearly, each consideration has its place. Trouble occurs when one dominates all the others. (Norman, 1988, p. 151)
The phone in the office rang and it was Larry Lipsitz, editor of Educational Technology. “Walt, I got another one!” “Another what?” I asked, wondering why I should know what Larry had. “Another one of those articles that says that ISD results in boring instruction. I get these all the time. Why don’t you write something on creativity and instructional design? There has to be some response, doesn’t there? Don’t you feel an obligation to respond?” The last sentence was clearly an emotional trigger to get me to commit to writing an article on this topic. Since I’m a consulting editor, and thereby get a free subscription to this pricey publication, it seemed like a reasonable challenge. I responded, with firmness in my voice, “Let me think about it.”
Time went by and I did think about it. The issue of creativity is being raised by the Situated Cognition/Constructivist/Anchored Instruction crowd (collectively referred to in this article as “the critics”), who are into really interesting, and sometimes exciting, stuff. I use the word “stuff” because I am not convinced that everything they create is instruction, and, in fact, neither are they.
Part of the critics’ continuing justification for the validity of their ideas about the desirability of creating learning environments is that traditional instructional designers, who follow the holy grail of ISD, create boring instruction. “They” claim that these designers, who in real life are perfectly fine, intelligent people, find themselves following a linear process that results in effective and efficient instruction, but instruction that isn’t very interesting or engaging. “They” say that the ISD process is the culprit.
While I knew what I thought about the issues raised by the critics, I decided to find out what four of our Masters students thought about them. As part of their comprehensive examination at the end of the Masters program in Instructional Systems at Florida State University, I asked Stacey Kaufman, Karen Rice, Iris Seet, and Monte Watkins how they would define creative instruction and to indicate if they thought the ISD process resulted in boring instruction. If it does, how could the process be changed? As you might suspect, none of the students thought there was a problem with the process; the problem, if there is one, lies elsewhere. They wrote informative answers to my questions, and I acknowledge their contributions to the thoughts that follow.
The historical roots of much of what today is referred to as instructional design lie in Skinnerian psychology, especially as it was manifested in programmed instruction (PI) (Dick, 1987). Skinner’s reinforcement theory was used to create carefully constructed self-instructional materials which provided students with a series of small frames of information, and questions and answers related to that information. A process similar to what we now refer to as formative evaluation was used by programmers to alter the instruction to minimize the errors made by students as they went through it.
Because students learned at their own pace, there was no group interaction built into the instruction. The purpose was to provide students with materials from which they could master a topic or skill.
The PI approach was used in a few materials for public schools and universities, and more extensively by the military. While there is no official death certificate, most unbiased observers would probably agree that programmed instruction died from the boredom expressed by students. Many students rebelled at the small steps and the forced writing of obvious answers to questions. They didn’t find the feedback very reinforcing.
While the PI format did not survive the 60s, the general approach that was used to create it did. Skinnerian ideas were supplemented with those of the three Roberts: Robert Mager on objectives, Robert Glaser on criterion-referenced testing, and, perhaps most importantly, Robert M. Gagne on the events of instruction and the conditions of learning (Dick & Carey, 1990). While the model used by designers has expanded and changed since the early days of programmed instruction, the critics claim that we still have a problem with boredom; that we are not very creative.
Do we have an official measuring stick for boredom? Or, for that matter, for creativity? No, we don’t. Research psychologists have measured creativity by asking people to name as many uses for, say, a brick as they could think of, or to form as many analogies as possible in a given period of time. The more answers, the more creative the person. But what do we mean by creative design? I don’t really think that question can be answered to each reader’s satisfaction. Therefore, I will assume that each reader has a general idea of what these characteristics are, and could recognize them in his or her own domain. Later in this article, I will offer specific suggestions about how we should define and measure creativity in the design context.
It should be acknowledged that this is not an empirical paper in which data will be presented that prove that designers who use a systematic instructional design process produce creative instruction. I am not aware of any data that indicate the degree of creativity exhibited by designers. Therefore, this article will only add to the conceptual debate on these issues.
Additional Analysis of the Scope of the Problem
It seems important to try to determine if the problem of boring instruction is limited only to those who use the systematic instructional design approach. Probably every reader has experienced, at one time or another, boring teacher-led or professor-led instruction, as well as boring TV instruction and boring self-instructional materials. Since very little instruction is systematically designed, we can assume that boring instruction can result from nearly any design approach. We should also agree that boring instruction can result from the use of any delivery system—from boring computer-based training questions to talking heads. And, perhaps we should also agree that there is, in fact, a continuum from the most boring of instruction to the most creative of instruction. It is not an “all or nothing” characteristic.
Does the Use of the ISD Process Result in Boring Instruction?
If designers use a process that learners claim results in boring instruction, then we must accept that fact. But we might want to look again at the process used by the designers and determine if they really used it appropriately. We are all aware that time and financial constraints can restrict what designers actually do on any given project. Compromises are often made, and sometimes they are costly in the end.
When I asked the Masters students about the design of creative instruction, they pointed to three areas in the design process as being critical to this outcome. The first was learner analysis. They indicated that creative instruction is somewhat in the eye of the beholder, in this case, the learners in the target population. The more we learn about them before we design the instruction, the more we can match their interests and concerns in the instruction. If the learners are anonymous, then we lose this tailoring of instruction to their interests.
The second concern of the Masters students was the instructional strategy component of the model. This is the area in which the designer specifies the delivery system, and how the content will be organized and presented to the learners. In the Dick and Carey model of instructional design (Dick & Carey, 1990), it is suggested that designers provide certain introductory information, along with information, practice, and feedback for each of the major objectives to be taught. There is a great deal of freedom here in terms of how information will be clumped, and what kinds of practice activities will be used. There is no reason why other theoretical approaches could not be used to formulate the strategy for conveying information to the learner (or for the learners to identify the information that they require). In addition, each of the Masters students mentioned the use of Keller’s (1987) ARCS Model as a means of systematically addressing the motivation of learners. Keller has indicated that motivation has four components which can be included by the designer as part of the instructional strategy: (1) gaining the attention of the learners, (2) demonstrating the relevance of the instruction to the learners, (3) instilling learners with confidence to succeed with the instruction, and (4) helping learners derive satisfaction when they do succeed. The routine use of the ARCS strategy should ensure that even the least creative of designers can develop instructional experiences that are engaging to learners. Keller’s ARCS model can be interpreted as a set of specific strategies for designers to use in order to be creative in their designs.
Formative evaluation was the third area of the model that was identified by the Masters students. Their point was that if “creative instruction” is an important characteristic of the instruction to be produced, then an array of questions can be asked during the formative evaluation to determine just how creative the instruction is perceived to be by learners. The learners can be asked such questions as, “Did this instruction hold your interest throughout? Were there places where you lost interest? Did you enjoy the game? What was the most interesting part of the instruction? How could the instruction be made more interesting?” The answers to these questions can be used to revise and refine the instruction so that it is viewed by learners as being interesting, i.e., not boring.
Two aspects of the design process were not stressed by the students in their answers to my questions: (1) knowledge of the subject-matter domain under consideration, and (2) knowledge of the context in which the to-be-learned information will be used by the learners. Researchers who have investigated the difference between novices and experts in any given field have determined that experts have developed extensive knowledge bases which they draw upon when they are solving problems in their areas of expertise. In a similar fashion, the greater the knowledge the designer has about both his or her own area of expertise and the area in which the instruction is being developed, the greater the likelihood that innovative approaches can and will be taken in the instruction. Since designers are often working in subject-matter domains in which they have no formal training, they must rely on subject-matter specialists to provide the content information. This can be a constraint on the creativity of any designer.
Related to subject-matter knowledge is knowledge of the context in which knowledge and skills will be used. Designers have been accused of decontextualizing instruction, i.e., presenting knowledge and skills without sufficient emphasis on examples of situations in which the skills will be used. Most Constructivists stress strategies for encouraging the transfer of skills from the learning context to the performance context. If designers are knowledgeable about the latter context, they can develop instruction that is not only more creative and interesting to the learner, but that also promotes transfer.
In summary, the Masters students said that ISD is not a process that results automatically in boring instruction. Interesting, motivating instruction can be created by the designer through extensive use of learner analysis, various instructional strategies, and extensive formative evaluation. In addition, the designer’s creativity will be enhanced by broad subject-matter knowledge or access to it, as well as knowledge of the context in which the newly-learned skills will be used.
Do Traditional ISD Evaluation Criteria Result in the Creation of Boring Instruction?
Let’s extend the discussion about the ISD model to determine whether, even when appropriately applied, the model might result in the creation of boring instruction. The traditional criteria used to evaluate systematically designed instruction have been effectiveness and efficiency. The major factor driving an appropriately conducted formative evaluation is the posttest for effectiveness. The posttest is designed to assess the terminal objective for the lesson, and some of the subordinate objectives. The designer carefully summarizes the results of the posttest to identify the objectives that have not been achieved by learners. This information is used to reexamine what is being taught as well as the strategy for teaching it. Changes are made that increase the likelihood of correct performance by future learners. These changes may have nothing to do with how interesting the instruction is.
The second criterion usually applied to instruction, but at a much lower level than effectiveness, is efficiency. Does the instruction provide just what is required in order to master the objectives in the least amount of time? This criterion is usually addressed by the designer in the instructional analysis process, i.e., the process used to analyze the instructional goal in order to identify the subordinate skills that must be included in the instruction. The process is used to identify exactly those skills that are required by the learner to master the terminal objective, and, by inference, to reject skills and knowledge that are not required to achieve mastery. Designers are always admonished to eliminate the “nice to know” so that the instruction will be efficient as well as effective.
Thus, when effectiveness and efficiency are used as the major criteria for the formative evaluation, the characteristics of the instruction that are related to these criteria will get attention during revisions. It should be acknowledged that designers often use questionnaires that ask learners what they like and dislike about instruction, but the predominant use of the data is not to make the instruction more interesting per se, but to make it more effective.
The Creativity Criterion
Those who say that ISD instruction is boring are, in effect, invoking a new criterion to be used when evaluating instruction, namely its “creativeness.” The Masters students who wrestled with this issue identified several interesting definitions for creative instruction. One said that creative instruction is “instruction that keeps learners motivated while still meeting the objectives of the instruction.” The emphasis here seems to be on keeping learners on task so that they will achieve the objectives. There is the obvious assumption that there will continue to be objectives for the instruction, and we know that many of our critics abhor the use of objectives.
Another Masters student, who has a total quality management orientation, said that creative instruction “engages the learners and goes beyond their expectations.” The focus is clearly on the perceptions of learners and their expectations, and not the expectations of reviewers of the instruction.
The critics have identified an area in which ISD is vulnerable, namely, the lack of creativity in the products we design. Should we now add “creativity” to effectiveness and efficiency as the criteria by which our instruction will be evaluated and revised? Before we answer that question, let’s consider what we have learned during the last ten years from the quality movement, and how that might relate to the design of creative instruction.
What’s Quality Got to Do with It?
In recent years, designers have become more aware of the importance of the client’s needs and perceptions regarding any product that is being delivered. Proponents of performance technology stress that instruction is usually only one component of a complex solution to an organizational problem. Performance analysis and needs assessments are becoming more routine as predecessors to the determination that instruction is the solution. Often it is not.
Likewise, from the quality movement we have learned that quality is determined not by the attributes of the product but by the satisfaction of the customer. “What the customer wants, the customer gets” is the motto of the quality movement. For designers who work in organizations with a quality orientation, there is a keen awareness of the need to satisfy their clients in terms of producing products that meet and even exceed their expectations.
What does this have to do with the design of instruction? It suggests that we can have our theoretical orientations, and argue back and forth about their validity, but in the “real world” it is the client who sets the parameters for what will be characterized as an acceptable product. Our clients may place a premium on creative instruction, just as our critics insist, or they may be much more interested in other aspects of the instruction, such as effectiveness and efficiency. Some clients may, in fact, be more interested in where instruction is delivered, and when it is delivered, and to whom. They may be less concerned about either effectiveness or creativity.
It would seem that the most reasonable assumption to make as a designer is that a client will want instruction that is effective, efficient, and interesting. That third criterion has always been there even if it was only implied. Our critics would have us, quite appropriately, recognize it as an important part of the design process. They would seem to focus almost exclusively on it, though, while I would simply add it to the other criteria, with the understanding that my next client will determine the relative importance of these and other criteria.
Conditions for Producing Creative Instruction
How is creative instruction designed? Let’s begin with the assumption that all of us are more or less creative, and, as the research has shown, there is not a high correlation between intelligence and creativity. It would seem that, based upon what we have learned in recent years, there are at least four major factors that should be addressed. (Note that these are not presented in boxes with procedural arrows. They can be addressed in almost any sequence!)
Client criteria. We must begin with the assumption that a needs assessment has indicated that instruction is part or all of the solution to a problem or opportunity within an organization. In the identification of the characteristics of the solution, the client has indicated that the designer should place a high premium on the design and development of creative instruction, that is, instruction that will be engaging to learners and exceed their expectations.
By establishing this criterion, the client is implying that time and resources can and should be spent on learner analysis, learning context analysis, performance context analysis, examination of a variety of technology-based platforms, and formative evaluation.
Climate supportive of creative approach. It is one thing to espouse creative approaches to instruction; it is another to create a work environment that supports them and lets them happen. We have learned that we can’t train people to demonstrate new skills on the job and then ignore the environment in which they are going to use those skills. Managers and supervisors must reinforce the development of creative instruction and reward those who do it best.
Participatory design. The Masters students who responded to the question about the design of creative instruction mentioned the inclusion of instructors and learners on the design team. While this may not be unique to the design of creative instruction, the importance of knowing the characteristics of the target population and the instructors will be critical for creating engaging instruction.
Perhaps one of our problems in the past was that we failed to ask ourselves this question about instruction we designed: “Would I like to study or participate in this instruction?” Even if we thought that we would, we might fall prey to the Fallacy of Self-Projection. This little-known axiom states that designers should not make design decisions based upon what they, themselves, would like, or what they think their own children would like. There is almost always a gulf between the designer and the learners. That gulf includes age, education, and experience. The only way to bridge the gulf is to spend time with the learners and their instructors. Enter their world rather than making them enter your own.
Implement technology. With a mandate from the client, support from the management, and the incorporation of learners and instructors on the design team, it is now appropriate to consider the technological opportunities that are available to provide creative, engaging, interesting, motivating instruction.
Not many years ago, the media selection decision was based upon the selection of medium x or y or z. Now, often we have all the media available to us in various multimedia platforms. We still must decide how to present certain information, but it doesn’t have to be at the cost of another medium. A learner can draw on a database, view a film clip, answer some questions, or send an e-mail message to another student.
Those who have designed instruction for a number of years recognize the almost exponential number of alternative ways to think about the design and delivery of instruction. It is nearly impossible to think about a situation in which the designer could fail to create engaging instruction. Certainly the opportunities are there like never before. This point leads us to the consideration of certain issues that will arise in any organization in which there is a desire to develop more creative instruction.
Is There a Down Side to Creativity?
In a recent article, Schutz (1994) discusses the problem of self-deception, a problem akin to self-projection. He describes a situation in which the executives of a company that had recently downsized were encouraging the employees to increase their risk-taking and creativity. He discussed the matter with the executives and asked them to say what first came to their minds when he said the word “creative.” Schutz indicates that their responses were immediate: “beard, sandals, dirty, unreliable, late reports, never there when you want him.” Their responses to “uncreative” were: “neat, on time, prompt reports, reliable.” At the conclusion of the discussion, the executives decided they really didn’t want more creativity.
The purpose of relating Schutz’s experience is not to say that creativity has a negative connotation, but rather to indicate that it is important to be extremely clear about the characteristics of what will be considered creative and what will not, and how that relates to the current culture of the organization. If, with certain populations, this is a loaded term, perhaps we should look for more descriptive words, like motivational, engaging, appealing, or even unusual.
The second consideration is how to approach the designers who are to become more creative. Our critics would say that their lack of creativity is due to the use of the wrong model of instructional design. However, those that I have talked with speak more of time pressures, resource limitations, the organizational requirement for traditional instructional approaches, and the apparent satisfaction of both clients and learners with the instruction as it is now being delivered. Nothing succeeds like success, and once an approach is successful, it is sometimes very difficult to displace it with an unproved approach. If, on the other hand, present approaches are being severely criticized, then this is the opportunity to try something new.
Does the Linearity of the Design Process Limit Creativity?
We can’t leave the topic of this paper without some consideration of what seem to be the fundamental concerns of our critics. While they may acknowledge the usefulness of certain components of our design model, they are concerned with what they perceive as its linearity, and the consequent limitations on creativity. They interpret the standard ISD model as requiring the designer to begin with the first box on the left and to proceed, lockstep, toward the right from one box to the next, never thinking about the next box until the present one is “completed.” This lockstep approach is seen as severely limiting the designer’s options in the process; it doesn’t allow for moving back and forth among the steps as new information and insights are gained.
My response to this observation is that the model that appears in the Dick and Carey (1990) text was never intended to reflect how instruction is designed in the “real world.” It was created initially as a way of representing the various innovations that were occurring in programmed instruction and curriculum development, and to place those innovations in a sequence in which they would be useful to the user. Over time the model has changed only slightly, and it has been learned initially by thousands of students as a left-to-right, one-step-at-a-time process. It always seemed to me that this is a reasonable strategy for teaching a process to novices.
Few students are very creative on their first use of the model on a project. They focus on how to perform each step in the model, and how to create a product that can be formatively evaluated. Few will ever do it exactly that way again. They will do what works for them, and what is required or permitted by their employer. And the more experience they have at designing instruction, the more effective, the more efficient, and the more creative they will become.
It should be explicitly noted that it is quite demeaning to practicing designers to be accused of creating boring instruction because they can think of and do only one step of the model at a time. This observation implies that some designers have been able to break out of this mental model and are now able to produce creative instruction via unspecified approaches. The rest are left behind, forever locked into an inappropriate linear model. That this argument is taken seriously surprises me. It is a straw man of the first magnitude. I know of no designers who either believe or practice such a rigid approach to their profession. Designers are aware of the need to consider many components of the model at the same time, and in no way can their thinking be described as linear.
How Do Designers Really Do What They Do?
Although I am highly critical of a number of studies in the literature that purport to describe “what designers really do,” I am willing to accept the evidence, such as that in Wedman and Tessmer (1993), which indicates that designers do not do all the steps in the typical ISD model. Designers are aware of which steps they do and which ones they don’t do, and they can give a variety of explanations, such as those listed above, for their selection of steps. Therefore, I find no evidence in either the literature or my experience that suggests that designers are not creative simply because they blindly follow a linear, ineffective model.
Will Our Instructional Focus Be Lost?
Perhaps the greatest threat to appropriate application of Constructivism within our profession is the possibility that we will lose our instructional focus; that creativity will replace effectiveness, that having fun will replace learning something, and that we won’t know the difference. How many times recently has the reader encountered some new “instruction” that was engaging, fun, and almost addictive? What happens when the designer asks, what will students know or be able to do when they finish this activity that they couldn’t do before? If there isn’t a good answer, then what we have encountered is probably entertainment.
This concern is perhaps the most important for all designers who consider themselves to have an overwhelming need to be creative. If we find ourselves in situations in which the criteria for evaluating our instruction are re-prioritized so that effectiveness and efficiency are superseded by creativity, we could easily find ourselves creating mere edutainment or infotainment.
The constructions of our critics will likely find a place in our education and training programs, but I am not sure why. They often assume entry behaviors that learners don’t have. They provide practice on skills that have never been mastered or were mastered long ago. They purport to encourage problem solving but offer little evidence of such because of the Constructivist aversion to criterion-referenced testing, that is, the reluctance to establish standards of performance that all learners will achieve. The “assessments” that are utilized are usually in the form of presentations of portfolios of completed activities. No value judgments are made. But is it educationally worthwhile if the learner is simply engaged, and has fun?
Those designers who adhere to a systematic design approach may be the only employees in an organization who focus on changes in human performance. Many industry savants suggest that, in the future, most companies will have access to the same technology and that it will be very difficult for any given company to gain a significant technological advantage over other companies. What will make a difference for an organization is the competencies of the staff, and particularly their abilities to anticipate and solve problems. Here we are, back in the camp of the Constructivists and their focus on relevance, context, and problem solving.
Are Instructional Design and Constructivism Compatible?
Constructivists have argued that you are either with them or against them. You can’t accept only some of their ideas and reject the rest (Bednar, Cunningham, Duffy, & Perry, 1992). I disagree with this position. We have benefited greatly from the concerns raised by the Constructivists, and the next edition of the Dick and Carey (1996) instructional design text will reflect some of these influences. However, the text still features a fundamental systematic design process.
I would argue that designers who augment ISD fundamentals with judicious use of selected Constructivist principles will make design decisions that result in instruction that is both engaging to learners and produces learning outcomes that are required by the client. What is required is a balanced perspective, and a balanced set of criteria by which we evaluate our efforts. (See Lebow, 1993, for a set of Constructivist values that he suggests might be adopted by instructional designers.)
Design and technology are important for businesses and scientists alike. In a recent statement by the American Association for the Advancement of Science (1990, p. 28) about what all Americans should know about science and technology, the following appears: “But there is no perfect design. Accommodating one constraint well can often lead to conflict with others. For example, the lightest material may not be the strongest, or the most efficient shape may not be the safest or the most aesthetically pleasing. Therefore, every design problem lends itself to many alternative solutions, depending on what values people place on the various constraints. For example, is strength more desirable than lightness, and is appearance more important than safety? The task is to arrive at a design that reasonably balances the many trade-offs, with the understanding that no single design is ever simultaneously the safest, the most reliable, the most efficient, the most inexpensive, and so on.”
As a closing comment, let me recount an incident described by Norman (1988, p. 153), who is a psychologist interested in the design of things we encounter every day. He describes his visits to a local science museum at which he observed that visitors try hard, seem to enjoy themselves, but usually miss the point of the displays. “The signs are highly decorative; but they are often poorly lit, difficult to read, and have lots of gushing language with little explanation…. I took one of my graduate classes there to observe and comment; we all agreed about the inadequacy of the signs, and, moreover, we had useful suggestions. We met with a museum official and tried to explain what was happening. He didn’t understand. His problems were the cost and durability of the exhibits. ‘Are the visitors learning anything?’ we asked. He still didn’t understand. Attendance at the museum was high. It looked attractive. It had probably won a prize. Why were we wasting his time?” Are designers also wasting the time of the critics?
American Association for the Advancement of Science. (1990). Science for all Americans. New York: Oxford University Press.
Bednar, A. K., Cunningham D., Duffy, T. M., & Perry, D. J. (1992). Theory into practice: How do we link? In T. M.
Duffy & D. H. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation. Hillsdale, NJ: Lawrence Erlbaum Associates.
Dick, W. (1987). A history of instructional design and its impact on educational psychology. In J. A. Glover & R. R. Ronning (Eds.), Historical foundations of educational psychology. New York: Plenum.
Dick, W., & Carey, L. M. (1990). The systematic design of instruction (3rd ed.). New York: HarperCollins.
Dick, W., & Carey, L. M. (1996). The systematic design of instruction (4th ed.). New York: HarperCollins.
Keller, J. M. (1987). Strategies for stimulating the motivation to learn. Performance & Instruction, 26(8), 1-7.
Lebow, D. (1993). Constructivist values for instructional systems design: Five principles toward a new mindset. Educational Technology Research & Development, 41(3), 4-16. Reprinted in B. Seels (Ed.) (1995). Instructional design fundamentals: A reconsideration (pp. 175-187). Englewood Cliffs, NJ: Educational Technology Publications.
Norman, D. A. (1988). The design of everyday things. New York: Doubleday Currency.
Rowland, G. (Ed.) (1994). Special issue on designing for human performance. Performance Improvement Quarterly, 7(3).
Schutz, W. (1994). Self-deception: The first explanation. Training, 31(9), p. 110.
Wedman, J., & Tessmer, M. (1993). Instructional designers’ decisions and priorities: A survey of design practice. Performance Improvement Quarterly, 6(2), 43-57.