Editor’s note: This article originally appeared in the August, 1966 issue of Fortune.
“Public education is the last great stronghold of the manual trades,” John Henry Martin, superintendent of schools in Mount Vernon, New York, recently told a congressional committee. “In education, the industrial revolution has scarcely begun.”
But begun it has—slowly, to be sure, but irresistibly, and with the most profound consequences for both education and industry. The past year has seen an explosion of interest in the application of electronic technology to education and training. Hardly a week or month goes by without an announcement from some electronics manufacturer or publishing firm that it is entering the “education market” via merger, acquisition, joint venture, or working arrangement (see page 123). And a number of electronics firms have been building substantial capabilities of their own in the education field.
Business has discovered the schools, and neither is likely to be the same again. It may be a bit premature to suggest, as Superintendent Martin does, that “the center of gravity for educational change is moving from the teachers’ college and the superintendent’s office to the corporation executive suite.” But there can be no doubt about the long-term significance of business’ new interest in the education market. The companies now coming into the market have resources—of manpower and talent as well as of capital—far greater than the education market has ever seen before. They have, in addition, a commitment to innovation and an experience in management that is also new to the field.
The romance between business and the schools began when the federal government took on the role of match-maker. Indeed, the new business interest in education is a prime example of Lyndon Johnson’s “creative federalism” at work. Federal purchasing power is being used to create—indeed, almost to invent—a sizable market for new educational materials and technologies. Until now, the stimulus has come mainly from the Department of Defense and the Office of Economic Opportunity. But the Elementary and Secondary Education Act of 1965 provided large federal grants to the schools for the purchase of textbooks, library books, audio-visual equipment, etc. It also greatly expanded the Office of Education’s research-and-development activities and gave it the prerogative, for the first time, to contract with profit-making as well as nonprofit institutions.
The most remarkable characteristic of industry’s invasion of the education market is that it has been accompanied by the affiliation of otherwise unrelated businesses. The electronics companies have felt the need for “software,” i.e., organized informational and educational material, to put into their equipment and have gone in search of such publishing companies as possessed it. Some of the publishing companies, in turn, particularly textbook publishers, have been apprehensive about the long-range future of their media and willingly joined in such auspicious marriages of convenience. As R.C.A.’s Chairman David Sarnoff explained his company’s merger with Random House last May, “They have the software and we have the hardware.”
The fact is that, as far as education is concerned, neither side has either—yet. In time, the application of electronic technology can and will substantially improve the quality of instruction. Experiments with the Edison Responsive Environment Talking Typewriter shown on the cover of this issue suggest that it has great potential for teaching children to read. I.B.M. has been working on the development of teaching systems since the late 1950’s and is now selling its “IBM 1500 instructional system” to a limited number of educators for research, development, and operational use. But a lot of problems—in hardware as well as software—will have to be solved before the computer finds wide acceptance as a teaching device. No computer manufacturer, for example, has begun to solve the technical problems inherent in building a computer that can respond to spoken orders or correct an essay written in natural language and containing a normal quota of misspellings and grammatical errors—and none has promised it can produce machines at a cost that can compete with conventional modes of instruction.
On the other hand, without the appropriate software, a computerized teaching system results in what computer people call a “GIGO system”—garbage in and garbage out. “The potential value of computer-assisted instruction,” as Dr. Launor F. Carter, senior vice president of System Development Corp., flatly states, “depends on the quality of the instructional material” that goes into it. But the software for a computer-assisted instructional system does not yet exist; indeed, no one yet knows how to go about producing it. The new “education technology industry,” as Professor J. Sterling Livingston of Harvard Business School pointed out at a Defense Department–Office of Education conference in June, “is not being built on any important technology of its own.” On the contrary, it “is being built as a satellite of the Information Technology Industry. It is being built on the technology of information processing, storage, retrieval, transmission, and reduction … by firms whose primary objective is that of supplying information processing and reproduction equipment and services.” And neither these firms, nor the professional educators, nor the scholars studying the learning process know enough about how people learn or how they can be taught to use the computers effectively.
Discovering the questions to be asked
That knowledge is now being developed. The attempts at computer application have dramatized the degree of our ignorance, because the computer, in order to be programed, demands a precision of knowledge about the processes of learning and teaching that the human teacher manages to do without. So far, therefore, the main impact of the computer has been to force a great many people from a great many different disciplines to study the teaching process; they are just beginning to discover what questions have to be asked to develop the theories of learning and of instruction they need.
In time, to be sure, both the hardware and the software problems will be solved, and when they are, the payoff may be large. It will come, however, only to those able to stay the course. And the course will be hard and long—five years, under the most optimistic estimate, and more probably ten or fifteen years. Anyone looking for quick or easy profits would be well advised to drop out now. Indeed, the greatest fear firms like I.B.M. and Xerox express is not that someone may beat them to the market, but that some competitor may rush to market too soon and thereby discredit the whole approach. A number of firms—several with distinguished reputations—did precisely that five years or so ago when they offered shoddy programs to the schools and peddled educationally worthless “teaching machines” and texts door to door.
A lot more is at stake, needless to say, than the fortunes of a few dozen corporations, however large. The new business-government thrust in education, with its apparent commitment to the application of new technologies, is already changing the terms of the debate about the future of American education, creating new options and with them, new priorities. “We have been dealt a new set of cards,” Theodore R. Sizer, dean of Harvard’s Graduate School of Education, has remarked, “and we must learn how to play with them.”
Rarely have U.S. corporations assumed a role so fraught with danger for the society, as well as for themselves, or so filled with responsibility and opportunity. For over the long run, the new business-government thrust is likely to transform both the organization and the content of education, and through it, the character and shape of American society itself. And the timing could not be more propitious. It is already clear that we have barely scratched the surface of man’s ability to learn, and there is reason to think that we may be on the verge of a quantum jump in learning and in man’s creative use of intellect. Certainly the schools and colleges are caught up in a ferment as great as any experienced since the great experiment of universal education began a century or so ago. Every aspect of education is subject to change: the curriculum, the instruments of education, the techniques and technology of instruction, the organization of the school, the philosophy and goals of education. And every stage and kind of education is bound up in change: nursery schools; elementary and secondary schools, both public and private, secular and parochial; colleges and universities; adult education; vocational training and rehabilitation.
Failure in the ghetto and the suburb
The schools have been in ferment since the postwar era began, with the pace of change accelerating since the early and middle 1950’s. Until fairly recently they were so deluged with the sheer problem of quantity—providing enough teachers, classrooms, textbooks to cope with the numbers of students that had to be admitted—that they had little energy for, or interest in, anything else. And now the pressure of numbers is hitting the high schools and colleges.
It is becoming clearer and clearer, however, that dealing with quantity is the least of it: most of the problems and most of the opportunities confronting the schools grow out of the need for a broad overhaul of public education. For more than a decade, a small band of reformers—among them Jerome Bruner, Jerrold Zacharias, Francis Keppel, John Gardner, Lawrence Cremin, Francis Ianni—have been engaged in an heroic effort to lift the quality and change the direction of public education. Their goal has been to create something the world has never seen and previous generations could not even have imagined: a mass educational system successfully dedicated to the pursuit of intellectual excellence. (See “The Remaking of American Education,” FORTUNE, April, 1961.)
This effort at reform has two main roots. The first, and in many ways most important, has been the recognition—largely forced by the civil-rights movement—that the public schools were failing to provide any sort of education worthy of the name to an intolerably large segment of the population. This failure is not diffused evenly throughout the society; it is concentrated in the rural and urban slums and racial ghettos. The failure is not new; as Lawrence Cremin and others have demonstrated, public education has always had a strong class bias in the U.S., and it has never been as universal or as successful as we have liked to believe. But in the contemporary world the schools’ failure to educate a large proportion of its students has become socially and morally intolerable.
At the same time there has been a growing realization that the schools are failing white middle-class children, too—that all children, white as well as black, “advantaged” as well as “disadvantaged,” can and indeed must learn vastly more than they are now being taught. By the early 1950’s it had become apparent that even in the most privileged suburbs the schools were not teaching enough, and that they were teaching the wrong things and leaving out the right things. Where the schools fell down most abysmally was in their inability to develop a love for learning and their failure to teach youngsters how to learn, to teach them independence of thought, and to train them in the uses of intuition and imagination.
The remaking of American education has taken a number of forms. The most important, by far, has been the drive to reform the curriculum—in Jerrold Zacharias’ metaphor, to supply the schools with “great compositions”—i.e., new courses, complete with texts, films, laboratory equipment, and the like, created by the nation’s leading scholars and educators. This has not meant a return to McGuffey’s Reader or “The Great Books,” however. Quite the contrary; the “explosion of knowledge,” combined with its instant dissemination, has utterly destroyed the old conception of school as the place where a person accumulates most of the knowledge he will need over his lifetime. Much of the knowledge today’s students will need hasn’t been discovered yet, and much of what is now being taught is (or may soon become) obsolete or irrelevant.
What students need most, therefore, is not more information but greater depth of understanding, and greater ability to apply that understanding to new situations as they arise. “A merely well-informed man,” that greatest of modern philosophers, the late Alfred North Whitehead, wrote forty-odd years ago, “is the most useless bore on God’s earth.” Hence the aim of education must be “the acquisition of the art of the utilization of knowledge.”
Reforming the teachers
It has become increasingly apparent, however, that reform of the curriculum, crucial as it is, is too small a peg on which to hang the overhaul of the public school. For one thing, the reformers have found that it is a good deal harder to “get the subject right” than they had ever anticipated. And getting it right doesn’t necessarily get it adopted or well taught. Five years ago Professor Zacharias was confident that with $100 million a year for new courses, texts, films, and the like he could work a revolution in the quality of U.S. education. Now he’s less confident. “It’s easier to put a man on the moon,” he says, “than to reform the public schools.”
Reform is impeded by the professional educators themselves, whose inertia can hardly be imagined by anyone outside the schools, as well as by the anti-intellectualism of a public more interested in athletics than in the cultivation of the mind. The most important bar to change, however, is the fact that the new curricula, and in particular the new teaching methods, demand so much more of teachers than they can deliver. Some teachers are unwilling to adopt the new courses; the majority simply lack the mastery of subject matter and of approach that the new courses require.
It does no good to reform the curriculum, therefore, without reforming the teachers, and, indeed, the whole process of instruction. Under present methods this process is grossly inefficient. One reason is that so few attempts have been made to improve it in any fundamental way. Without question, the schools would be greatly improved if, as James Bryant Conant and others have suggested, they could attract and retain more teachers who know and like their subjects and who also like to teach. A great deal has been accomplished along these lines in recent years, and the experience suggests some kind of reversal of Gresham’s Law: raising standards seems to attract abler people into the teaching profession. But something more is needed: teachers have to know how to teach—how to teach hostile or unmotivated children as well as the highly motivated. Until recently, however, most of the creative people concerned with education have been convinced that teaching is an art which a person either has or lacks, and which in any case defies precise description.* Hence their failure to study the process of instruction in any scientific or systematic way. (The collection of banalities, trivialities, and misinformation that make up most of the courses in “method” in most teachers’ colleges represents the antithesis of this kind of study.)
Organized to prevent learning
To be sure, teaching—like the practice of medicine—is very much an art, which is to say, it calls for the exercise of talent and creativity. But like medicine, it is also—or should be—a science, for it involves a repertoire of techniques, procedures, and skills that can be systematically studied and described, and therefore transmitted and improved. The great teacher, like the great doctor, is the one who adds creativity and inspiration to that basic repertoire. In large measure, the new interest in the development of electronic teaching technologies stems from the growing conviction that the process of instruction, no less than the process of learning, is in fact susceptible to systematic study and improvement.
Part of the problem, moreover, is that most of the studies of the teaching process that have been conducted until fairly recently have ignored what goes on in the classroom, excluding as “extraneous” such factors as the way the classroom or the school is organized. Yet it is overwhelmingly clear that one of the principal reasons children do not learn is that the schools are organized to facilitate administration rather than learning—to make it easier for teachers and principals to maintain order rather than to make it easier for children to learn. Indeed, to a degree that we are just beginning to appreciate as the result of the writings of such critics as Edgar Z. Friedenberg, John Holt, and Bel Kaufman, schools and classrooms are organized so as to prevent learning or teaching from taking place.
The new concept of intelligence
The solution, however, is not, as impatient (and essentially anti-intellectual) romanticists like Paul Goodman and John Holt seem to advocate, to abolish schools—i.e., to remove the “artificial” institutions and practices we seem to put between the child and his innate desire to learn. To be sure, the most remarkable feat of learning any human ever performs—learning to speak his native tongue—is accomplished, in the main, without any formal instruction. But while every family talks, no family possesses more than a fraction of the knowledge the child must acquire in addition. It would be insane to insist that every child discover that knowledge for himself; the transmission of knowledge—new as well as old—has always been regarded as one of the distinguishing characteristics of human society; and that means, quite simply, that man cannot depend upon a casual process of learning; he must be “educated.”
He not only must be educated; he can be educated—of this there no longer can be any doubt. The studies of the learning process conducted over the past twenty years have made it abundantly clear that those who are not now learning properly—say, the bottom 30 to 50 percent of the public-school population—can in fact learn, and can learn a great deal, if they are properly taught from the beginning. (These studies make it equally clear that those who are learning can learn vastly more.) This proposition grows out of the repudiation of the old concept of fixed or “native” intelligence and its replacement by a new concept of intelligence as something that is itself learned. To be sure, nature does set limits of sorts. But they are very wide limits; precisely what part of his genetic potential an individual uses is determined in good measure by his environment, which is to say, by his experiences.
And the most important experiences are those of early childhood. The richer the experience in these early years, the greater the development of intelligence. As the great Swiss child psychologist Jean Piaget puts it, “the more a child has seen and heard, the more he wants to see and hear.” And the less he has seen and heard, the less he wants—and is able—to see and hear and understand. Hence the growing emphasis on preschool education.
The abandonment of the concept of fixed intelligence requires changes all along the line. The most fundamental is a new concern for individual differences, which Professor Patrick Suppes of Stanford calls “the most important principle of learning as yet unaccepted in the working practice of classroom and subject-matter teaching.” To be sure, educators have been talking about the need to take account of individual differences in learning for at least forty years—but for forty years they’ve been doing virtually nothing about it, in large part because they have lacked both the pedagogy and the technology.
Now, however, the technology is becoming available—and at a time when there is a growing insistence that the schools must take account of individual differences. Indeed, this quest for ways to individualize instruction is emerging as the most important single force for innovation and reform.
In part, the demand grows out of recent research on learning, which has made it clear, as Professor Susan Meyer Markle of U.C.L.A. has put it, that “individualized instruction is a necessity, not a luxury.” In part, too, the demand stems from the conviction, as Lawrence Cremin puts it, that “any system of universal education is ultimately tested at its margins”—by its ability to educate gifted and handicapped as well as “average” youngsters.
The pressure for individualization of instruction is developing even more strongly as a byproduct of the efforts at desegregation of the public schools. Because of the schools’—and society’s—past failures, Negro children tend to perform below the level of the white students with whom they are mingled. They need a lot of special attention and help in order to overcome past deficits and fulfill their own potential. Few schools are providing this help; most educators are simply overwhelmed by problems for which their training and experience offer no guide. And so they tend to deal with the problem in one of two ways: by ignoring it (in which case either the Negro or the white students, or both, are shortchanged); or by putting the children into homogeneous “ability groups,” in which case they are simply resegregated according to I.Q. or standardized test scores. Neither approach is likely to be acceptable for very long. The need is for a system of instruction in which all students are seen as special students, and in which, in Lyndon Johnson’s formulation, each is offered all the education that his or her ambition demands and that his or her ability permits.
Corn for the behaving pigeon
Enter the computer! What makes it a potentially important—perhaps revolutionary—educational instrument is precisely the fact that it offers a technology by which, for the first time, instruction really can be geared to the specific abilities, needs, and progress of each individual.
The problem is how. Most of the experimentation with computer-assisted instruction now going on is based, one way or another, on the technique of “programed instruction” developed in the 1950’s by a number of behavioral psychologists, most notably B. F. Skinner of Harvard. Professor Skinner defines learning as a change in behavior, and the essence of his approach is his conviction that any behavior can be produced in any person by “reinforcing,” i.e., rewarding closer and closer approximations to it. It is immaterial what reward is used: food (corn for a pigeon, on which most of Skinner’s experiments have been conducted, or candy for a child), praise, or simply the satisfaction a human being derives from knowing he is right. What is crucial is simply that the desired behavior be appropriately rewarded—and that it be rewarded right away. By using frequent reinforcement of small steps, the theory holds, one can shape any student’s behavior toward any predetermined goal.
To teach a body of material in this way, it is necessary first to define the goal in precise and measurable terms—a task educators normally duck. Then the material must be broken down into a series of small steps—thirty to 100 frames per hour of instruction—and presented in sequence. As a rule, each sequence, or frame, consists of one or more statements, followed by a question the student must answer correctly before proceeding to the next frame. Since the student checks his own answer, the questions necessarily are in a form that can be answered briefly, e.g., by filling in a word, indicating whether a statement is true or false, or by choosing which of, say, four answers is correct. (Most programers have abandoned the use of “teaching machines,” which were simply devices for uncovering the answer and advancing to the next frame. Programs are now usually presented in book form, with answers in a separate column in the margin; the student covers the answers with a ruler or similar device, which he slides down the page as needed.) If the material has been programed correctly—so the theory holds—every student will be able to master it, though some will master it faster than others. If anyone fails to learn, it is the fault of the program, not of the student. Programed instruction, in short, is a teaching technology that purports to be able to teach every student, and at his own pace.
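In present-day terms, the frame-by-frame mechanics just described can be sketched as a short program. The frames, answers, and the simulated student below are invented for illustration; a real program would of course be authored and tested by subject-matter specialists.

```python
# A minimal sketch of frame-based programed instruction: the material
# is broken into small frames, each posing a question the student must
# answer correctly (with immediate feedback) before advancing to the
# next frame. The frames here are hypothetical examples.

frames = [
    ("7 x 8 = ?", "56"),
    ("9 x 6 = ?", "54"),
    ("True or false: 12 x 12 = 144", "true"),
]

def run_program(frames, answer_fn):
    """Present frames in sequence; the student proceeds at his own
    pace, repeating each frame until he answers correctly."""
    attempts = 0
    for prompt, correct in frames:
        while True:
            attempts += 1
            response = answer_fn(prompt)
            if response.strip().lower() == correct:
                break  # immediate reinforcement: advance on success
    return attempts

# A simulated student who has mastered the material answers every
# frame on the first try, so attempts equals the number of frames.
key = {prompt: answer for prompt, answer in frames}
print(run_program(frames, lambda prompt: key[prompt]))  # prints 3
```

The loop captures the theory's central claim: every student eventually masters every frame, differing only in how many attempts the passage takes.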
But teach him what? That’s the rub. Most of the applications of programed instruction have been in training courses for industry and the armed forces, where it is relatively easy to define the knowledge or skills to be taught in precise behavioral terms, and where the motivation to learn is quite strong. (One survey of industry’s use of programed instruction indicated that 69 percent of the programs used were “job-oriented.”) It’s a lot harder to specify the “behavior” to be produced, say, by a course in Shakespeare or in American history, and a lot more difficult to sustain the interest of a student whose job or rank does not depend directly on how well he learns the material at hand. And the small steps and the rigidity of the form of presentation and the limitation of response make a degree of boredom inevitable, at least for students with some imagination and creativity.
If programing is used too extensively, moreover, it may prevent the development of intuitive and creative thinking or destroy such thinking when it appears. For one thing, programed instruction seems to force a student into a relatively passive role, whereas most learning theorists agree that no one can really master a concept unless he is forced to express it in his own words or actions and to construct his own applications and examples. It is not yet clear, however, whether this defect is inherent in the concept of programing or is simply a function of its present primitive state of development. A number of researchers are trying to develop programs that present material through sound and pictures as well as print, and require students to give an active response in a variety of ways—e.g., drawing pictures or diagrams, writing whole sentences. Donald Cook, manager of the Xerox education division’s applied-research department, has experimented with programs to teach students how to listen to a symphony. And Professor Richard Crutchfield of the University of California at Berkeley is using programed instruction techniques to try to teach students how to think creatively—how to construct hypotheses, how to use intelligent guessing to check the relevance of the hypotheses, etc.
Teaching by discovery
More important, perhaps, the rigidity of structure that seems to be inherent in programed instruction may imply to students that there is indeed only one approach, one answer; yet what the students may need to learn most is that some questions may have more than one answer—or no answer at all. Programed instruction would appear to be antithetical to the “discovery method” favored by Bruner, Zacharias, and most of the curriculum reformers. This is a technique of inductive teaching through which students discover the fundamental principles and structures of each subject for themselves. Instead of telling students why the American colonists revolted against George III, for example, a history teacher using “the discovery method” would give them a collection of documents from the period and ask them to find the causes themselves.
The conflict between programed instruction and the discovery method may be more apparent than real. At the heart of both (as well as of the “Montessori method”) is a conception of instruction as something teachers do for students rather than to them, for all three methods approach instruction by trying to create an environment that students can manipulate for themselves. The environment may be the step-by-step presentation of information through programed instruction; it may be the source documents on the American revolution that students are asked to read and analyze, but that someone first had to select, arrange, and try out; it may be the assortment of blocks, beads, letters, numbers, etc., of the Montessori kindergarten.
There is general agreement, however, that at the moment, programed instruction can play only a limited role in the schools. Apart from anything else, it is enormously expensive; the cost of constructing a good program runs from $2,000 to $6,000 per student-hour. Because of the costs and the primitive state of the art, Donald Cook believes it inadvisable to try to program an entire school course; programing should be reserved for units of five to fifteen hours of work, teaching specific sets of information or skills that can (or must) be presented in sequence (e.g., multiplication tables or rules of grammar) and whose mastery, as he puts it, offers “a big payoff.” In this way teachers can be relieved of much of the drill that occupies so much classroom time; if students can come to class having mastered certain basic information and skills, teachers and students can conduct class discussions on a much higher level.
When the proper limitations are observed, therefore, programed instruction can be enormously useful, both as a means of individualizing instruction and as a research instrument that can lead to greater understanding of the learning and the teaching processes. It is being used in both these ways at the Oakleaf School in Whitehall, Pennsylvania, just outside Pittsburgh (see the photographs on page 121), where the most elaborate experiment in the development of a system of individualized instruction is being carried out under the direction of Professors Robert Glaser, John Bolvin, and C. M. Lindvall of the University of Pittsburgh’s Learning Research and Development Center.
The uses of feedback
Computers and their associated electronic gadgetry offer ways of remedying some of the obvious defects of programed instruction. For example, programs generally involve only one sense—sight—whereas most learning theorists believe that students learn faster and more easily if several senses are brought into play. Electronic technology makes it possible to do just that. When a youngster presses one of the keys on the Edison Responsive Environment’s Talking Typewriter, the letter appears in print in front of him, while a voice tells him the name of it. When he has learned the alphabet, the machine will tell him—aurally—to type a word; the machine can be programed so that the student can depress only the correct keys, in correct order. And at Patrick Suppes’ Computer-Based Mathematics Laboratory at Stanford University (see photographs, page 120), students using earlier versions of I.B.M.’s new 1500 Computer-Assisted Instructional System receive instructions or information aurally (through prerecorded sound messages) or visually (through photographs, diagrams, or words and sentences that are either projected on a cathode-ray tube or presented in conventional type-written form). Students may respond by typing the answer, by writing on the cathode-ray tube with an electronic “light pen,” or by pushing one of several multiple-choice buttons.
To be sure, the 1500 system is still experimental—wide commercial application is five years away—and much richer and far more flexible “environments” are necessary to make the computer a useful teaching device. But computer manufacturers are confident that they can come up with wholly new kinds of input and output devices.
What makes the computer so exciting—and potentially so significant—is its most characteristic attribute, feedback, i.e., its ability to modify its own operation on the basis of the information fed into it. It is this that opens up the possibility of responding to each student’s performance by modifying the curriculum as he goes along. This cannot yet be done. Programed instruction currently deals with individual differences in a crude way, chiefly by permitting students to move along as slowly or as rapidly as they can; they still all deal essentially with the same material. But speed of learning is only one relevant dimension of individual differences, and not necessarily the most important. Suppes, among others, is convinced that the best way to improve learning is through “an almost single-minded concentration on individual differences” in the way material is presented to the student.
What this means, in practice, is that a teacher should have a number of different programs at his disposal, since no single strategy of instruction or mode of presentation is likely to work for every student. Second, he should be able to select the most appropriate program for each student on the basis of that student’s current knowledge, past performance, and personality. Third and most important, he should be able to modify the program for each student as he goes along in accordance with what the student knows and doesn’t know, the kinds of materials he finds difficult and the kinds he learns easily. In time it should be practicable to program a computer to assist in all of these functions.
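Read in present-day terms, the branching strategy described above amounts to a simple selection rule: pick each student’s next exercise from his record of recent responses. The sketch below is the editor’s minimal illustration of that idea only; the topic names, the five-response window, and the 40 percent error threshold are all hypothetical, and no system of the period worked this way in detail.

```python
def choose_next_item(history, items):
    """Pick the next exercise from a student's response record.

    history: list of (topic, correct) tuples from past responses.
    items:   dict mapping topic -> list of exercises, easiest first.
    """
    if not history:
        # No record yet: start with the first topic's easiest exercise.
        topic = next(iter(items))
        return items[topic][0]

    topic, _ = history[-1]
    recent = [ok for t, ok in history[-5:] if t == topic]
    error_rate = 1 - sum(recent) / len(recent)

    if error_rate > 0.4:
        # Struggling: drop back to the easiest exercise on this topic.
        return items[topic][0]

    # Doing well: advance to the next unseen exercise on this topic.
    seen = sum(1 for t, _ in history if t == topic)
    pool = items[topic]
    return pool[min(seen, len(pool) - 1)]
```

A teacher with several such rules—one per “strategy of instruction”—could, in principle, assign each student the rule that suits him, which is the three-part scheme the paragraph above describes.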
Games students play
Computers lend themselves to the “discovery method” as well as to programed instruction. The exercise of simulating situations and playing games on a computer, for example, can help a student gain insight into a problem by making it possible for him to experiment—and to see the consequences of his (or other people’s) actions in much shorter time than is possible in real life. The computer also imposes a strong discipline on the student, forcing him to analyze a problem in a logically consistent manner, while freeing him from a good deal of time-consuming computation.
The armed forces have been using computer simulation and computer games to teach military strategy, and the American Management Association to teach business strategy. Now, a number of researchers, among them Professor James Coleman of Johns Hopkins, are trying to adapt the technique to the instruction of high-school students. Preliminary results suggest that it may be particularly effective in teaching the so-called “disadvantaged” and “slow learners,” whose motivation to learn in ordinary classroom situations has been destroyed by years of failure.
As with computer-assisted programed instruction, costs will have to come down dramatically, and techniques for addressing the computer in natural language will have to be developed before widespread application is possible. In the meantime the experiments with computer games have led a number of educational researchers to try to develop non-mechanical games of the Monopoly variety for teaching purposes, especially in the social sciences.
Computers are likely to enhance learning in still another way—by increasing both the amount of information students have at their disposal and the speed with which they can get it. In time electronic storage, retrieval, and presentation of information should make it possible for students or scholars working in their local library—ultimately, perhaps, in their own home—to have access to all the books and documents in all the major libraries around the country or the world. A great many technical problems remain to be solved, however, as everyone working on information retrieval knows through hard (and sometimes bitter) experience.
Thoughts in a marrow bone
The biggest obstacle to the introduction of computer-assisted instruction is not technological; it is our ignorance about the process of instruction. Significant progress has been made, however, in identifying what needs to be known before a theory of instruction can be developed. It is clear, for example, that any useful theory must explain the role of language in learning and teaching—including its role in preventing learning. It is language, more than anything else, that distinguishes human from animal learning; only man can deal with the world symbolically and linguistically. But verbalization is not the only way people learn or know, as Jerome Bruner of Harvard emphasizes. We know things “enactively,” which is to say, in our muscles. Children can be very skillful on a seesaw without having any concept of what it is and without being able to represent it by drawing a balance beam (the use of imagery) or by writing Newton’s law of moments (symbolic representation). Present teaching methods, Bruner argues, place too much emphasis on the verbal—a fact he likes to illustrate by quoting these magnificent lines from Yeats:
God guard me from those thoughts men think
In the mind alone;
He that sings a lasting song
Thinks in a marrow-bone.
The result is that youngsters too often display great skill in using words that describe words that describe words, with no real feel for, or image of, the concrete phenomenon itself.
Knowing something, moreover, involves at least two distinct processes. The first is memory, the ability to recall the information or concept on demand; and the second is what learning theorists call “transfer,” i.e., the ability not only to retrieve the knowledge that is in the memory but to apply it to a problem or situation that differs from the one in which the information was first acquired. We know somewhat more about memory, and recent discoveries in molecular biology hold the promise of vast gains in our understanding of it and our ability to improve it. (See “Inside the Molecules of the Mind,” FORTUNE, July 1, 1966.)
Most learning theorists, however, believe that transfer is more important than memory, and that the degree of transfer a student develops depends on how, as well as what, he was taught. For transfer involves a number of specific and distinct traits or skills. A person must be able to recognize when a problem is present. He must be able to arrange problems in patterns—to see that each problem is not entirely unique but has at least some elements in common with other problems he has solved in the past. He must have sufficient internal motivation to want to solve the problem, and enough self-discipline to persist in the face of error. He must know how to ask questions and generate hypotheses, and how to use guessing and first approximations to home in on the answer. There is reason to think that these skills can be taught. In any case, we must know far more than we do now about both memory and transfer before we can develop the theory of instruction needed to program computers effectively.
Besides that, we need to know more about how the way material is presented—for example, the sequence, size of steps, order of words—affects learning. And we need to understand how to make children—all children—want to learn. We need to know how to make children coming from “intellectually advantaged” as well as “disadvantaged” homes regard school learning as desirable and pleasurable. The problem is larger than it may seem, for there is a deep strain of anti-intellectualism running through American life. The notion that intellectual activity is effete and effeminate takes hold among boys around the fifth grade, and becomes both deep and widespread in the junior-high years, when youngsters are most susceptible to pressure from their peers. (Curiously enough, the notion that intellectual activity is unfeminine sets in among girls at about the same age.) We need to know how to overcome these widespread cultural attitudes, as well as the emotional and neurological “blocks” that prevent some youngsters from learning at all. And we must understand far better than we now do how different kinds of rewards and punishments affect learning.
Interestingly enough, one of the greatest advantages the computer possesses may well be its impersonality—the fact that it can exhibit infinite patience in the face of error without registering disappointment or disapproval—something no human teacher can ever manage. These qualities may make a machine superior to a teacher in dealing with students who have had a record of academic failure, whether through organic retardation, emotional disturbance, or garden-variety learning blocks. The impersonality of the machine may be useful for average or above-average children as well, since it increases the likelihood that a youngster may decide to learn to please himself rather than to please his parents or teachers. And motivation must become “intrinsic” rather than “extrinsic” if children are to develop their full intellectual capacity.
There is reason to think that we may need a number of theories of learning and instruction. For one thing, the process of learning probably differs according to what it is that is being learned. As the Physical Science Study Committee put it in one of its annual reports, “We have all but forgotten, in recent years, that the verb ‘to learn’ is transitive; there must be some thing or things that the student learns.” Unless that thing seems relevant to a student, he will have little interest in learning it (and he will derive little or no reward from its mastery). In any case, different subjects—or different kinds of students—may require different methods of instruction; a method that works wonderfully well in teaching physics may not work in teaching the social sciences.
More important, perhaps, different kinds of students may require different teaching strategies. It is only too evident that methods that work well with brighter-than-average children from upper-middle-class families fail dismally when used with children, bright or dull, from a city or rural slum. And differences in income and class are not the only variables; a student’s age, sex, ethnic group, and cultural background all affect the way his mind operates as well as his attitude toward learning. Differences in “cognitive style” may also have to be taken into account—for example, the fact that some people have to see something to understand it, while others seem to learn more easily if they hear it.
What knowledge is worth most?
When adequate theories of instruction have been developed, the new educational-system designers will still have to decide what it is that they want to teach. That decision cannot be made apart from the most fundamental decisions about values and purpose—the values of the society as well as the purpose of education. What we teach reflects, consciously or unconsciously, our concept of the good life, the good man, and the good society. Hence “there is no avoiding the question of purpose,” as Lawrence Cremin insists. And given the limited time children spend in school and the growing influence of other educational agencies, there is no avoiding the question of priorities—deciding what knowledge is of most worth.
The answers will be very much affected by the new electronic technologies. Indeed, the computer will probably force a radical reappraisal of educational content as well as educational method, just as the introduction of the printed book did. When knowledge could be stored in books, the amount of information that had to be stored in the human brain (which is to say, committed to memory) was vastly reduced. The “anti-technologists” of antiquity were convinced that the book, by downgrading memory, could produce only a race of imbeciles. “This discovery of yours,” Socrates told the inventor of the alphabet in the Phaedrus, “will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves … They will appear to be omniscient and will generally know nothing.”
The computer will enormously increase the amount of information that can be stored in readily accessible form, thereby reducing once again the amount that has to be committed to memory. It will also drastically alter the role of the teacher. But it will not replace him; as some teaching-machine advocates put it, any teacher who can be replaced by a machine deserves to be. Indeed, the computer will have considerably less effect on teachers than did the book, which destroyed the teacher’s monopoly on knowledge, giving students the power, for the first time, to learn in private—and to learn as much as, or more than, their masters. The teaching technologies under development will change the teacher’s role and function rather than diminish his importance.
Far from dehumanizing the learning process, in fact, computers and other electronic and mechanical aids are likely to increase the contact between students and teachers. By taking over much—perhaps most—of the rote and drill that now occupy teachers’ time, the new technological devices will free teachers to do the kinds of things only human beings can do, playing the role of catalyst in group discussions and spending far more time working with students individually or in small groups. In short, the teacher will become a diagnostician, tutor, and Socratic leader rather than a drillmaster—the role he or she is usually forced to play today.
The decentralization of knowledge
In the long run, moreover, the new information and teaching technologies will greatly accelerate the decentralization of knowledge and of education that began with the book. Because of television and the mass media, not to mention the incredible proliferation of education and training courses conducted by business firms and the armed forces, the schools are already beginning to lose their copyright on the word education. We are, as Cremin demonstrated in The Genius of American Education, returning to the classic Platonic and Jeffersonian concepts of education as a process carried on by the citizen’s participation in the life of his community. At the very least, the schools will have to take account of the fact that students learn outside school as well as (and perhaps as much as) in school. Schools will, in consequence, have to start concentrating on the things they can teach best.
New pedagogies and new technologies will drastically alter the internal organization of the school as well as its relation to other educational institutions. Present methods of grouping a school population by grade and class, and present methods of organization within the individual classroom, are incompatible with any real emphasis on individual differences in learning. In the short run, this incompatibility may tend to defeat efforts to individualize instruction. But in the long run, the methods of school and classroom organization will have to accommodate themselves to what education will demand.
In the end, what education will demand will depend on what Americans, as a society, demand of it—which is to say, on the value we place on knowledge and its development. The potential seems clear enough. From the standpoint of what people are already capable of learning, we are all “culturally deprived”—and new knowledge about learning and new teaching technologies will expand our capacity to learn by several orders of magnitude. “Our chief want in life,” Emerson wrote, “is someone who will make us do what we can.”
*There is nothing wrong with the American school system, William James declared some sixty-seven years ago, in a view still being echoed in academic circles, that could not be cured by “impregnating it with genius.” But genius, by definition, is always in short supply.