Human Enhancement Through Evolutionary Technology

The first part of another influential Brodey article (co-written with Nilo Lindgren). It argues that human enhancement is an important objective to strive towards in designing computer systems and interfaces.

The coming widespread availability of computational power or “distributed intelligence” could open the door to a new kind of “interfacing in depth” between men and machines. Engineers might begin designing evolutionary artifacts aimed at an enhancement of man's control skills and perceptions.

The thrust of this article is this: There is a need now, more than ever before, for men to stretch their capacities in what we shall call evolutionary skills. Moreover, it is at last becoming possible technologically to enhance these skills in man by incorporating somewhat similar evolutionary skills in the machines which we design and build. However, if engineers are to develop machines with evolutionary capabilities, they will need to restructure their own way of thinking, throw out traditional ways of thinking, and find their way, through playing with evolutionary design techniques, into an ever-deepening understanding of the significance of such techniques. They must bootstrap themselves into a new kind of “think,” into a new climate of man-machine interaction, in which men evolve intelligent machines and intelligent machines evolve men. This new kind of think is what this article tries to unfold in an effort to spur lively support for the evolutionary direction.

It should be clear from the outset that a widespread technology of artificial intelligence, upon which the argument of this article depends, does not yet exist. Some readers will hold that it is wrong or premature to extend new promises and proffer new uses of intelligent machines when the field is still littered with the disappointments and the disparities of past promises and present performances. But if intelligent machines and an evolutionary technology are to come into widespread use, there must also arise a widespread realization of how that technology might profitably be used. To reinforce the demand for the technology, we need a spirited and practical image of the ways in which it is needed.

“Human enhancement,” as we argue in this article, is one way. It is a way of involving the human in the evolving technology. It is a way of breaking the paradox: “You don’t get the technology until you have the demand; and you don’t get the demand until you have the technology.” Both the demand and the technology must evolve hand in hand, through a real-life dialogue. One way to get this dialogue going in an evolutionary direction is to do a bit of skating on thin ice. That we shall do.


Times change, and the works that men do change. Through invention, and the evolution of inventions, man has continually modified and worked shifts in his environment. Each generation adds its creations to what came before. Field becomes farm, logs become wheels, rocks become buildings, the shortening and lengthening shadows cast by the sun become time. Each of these transformations dawns in the mind of man in the form of concepts, ideas clenched in the mind “like a fist in your hand.”[1] Man envisions his world in the light of his own works...“the pastures of heaven,” the great “wheel of the universe,” the “house of the soul,” the “desire for immortality.” And as man’s invented artifacts give way to new inventions, his conceptions of the world give way to new concepts. Yesterday’s truths become today’s cliches, the mental junk and obsolescent concepts that need to be continually cleaned out to make way for new truths and new concepts. In the Western World, science is born, and psychology springs from Aristotelian analyses. As Eilhard von Domarus tells us in “The Logical Structure of Mind,”[2] it was Aristotle “who made possible the distinction between the sciences of mind and of matter”—two branches of science that have been separated ever since. In our own epoch, man conceptualizes evolution, and begins to examine the deep laws of life whereby the past has passed into the present. The farms have become highways, the wheels have become automobiles, the buildings have become laboratories, and time has become relative. And only in our day do the scientists of matter and of mind attempt to bring the two houses of science together again, to crown these fantastic edifices of knowledge that have been two thousand years in the making.

But with what success? It begins to appear that the logic that applies to the analysis of inanimate matter, a two-valued logic of true-and-false, and the chains of cause-and-effect, do not aptly model the operations of living beings. The description of life requires more than a logic that can be derived from “truth tables.”


It appears, too, that inadequate attention to the process of cleaning out mental junk, of unlearning obsolescent concepts, hinders the evolution of new concepts. Yet, we don’t even know how to think our way into the problem of how we “unlearn.” Our newest computational tools, however, invite us to reconsider basic premises about how we learn to learn.

In tracing the evolution of man’s inventions in his book for Everyman,[3] Norbert Wiener, the creator of our legacy of cybernetics, notes that “the art of invention is conditioned by the existing means.” Today, as we well know, the “existing means” have changed radically from the means of just a generation ago. The dreams of our predecessors, and their purposes, have become our facts. Many of the potentials that Norbert Wiener put forward only speculatively are already here. Many of his conceptions exist as our hardware, and these new tools are getting better at a rate he more or less predicted. His book, published in 1950, although it already reads like a “period piece,” can now be profitably re-evaluated by the community of engineers as a source book of ideas of how to make practical use of our new computational and technical skills. The title of his book, The Human Use of Human Beings, still lingers as a call to action rather than denoting an accomplished fact.


To do what? To create, through the emerging means, through distributed “artificial intelligence,” an environment more consonant with the real needs of man. There will soon be a computer available at the end of each telephone circuit that could be used to help prevent us from being carried beyond our human powers to manage an environment increasingly dominated by unintelligent machines governed by essentially nonhuman principles. We need an environment, which is more and more made by us, to have more of our kind of intelligence and our kind of behavior. But how can that be done, and why should it be done?

In his inaugural address as President of M.I.T. this past October, Howard W. Johnson affirmed that Institute’s concern with the vigorous current of change that modern technology is producing. We cannot produce students who are, as in Kafka’s words, like “couriers who hurry about the world, shouting to each other messages that have become meaningless.” It is difficult to see, he stressed, how the evolving professional community can be without an “understanding of [both] the physical and biological world.”[4] Furthermore, he quoted President Kennedy in stressing that “the real problem of our century is the management of an industrial society.” Can engineers use their technological skill to refine what has been seat-of-the-pants intuition? How can they assist in and clarify the tasks of managing an industrial society?

From another quarter of our social organization come similar sentiments: “Developing our human capabilities to the fullest is what ultimately matters most. Call it humanism—or whatever—but that is clearly what education in the final analysis is all about.”[5] The speaker is Secretary of Defense, Robert S. McNamara. Again, there is the practical question: how?

Perhaps, as we have already suggested, our present technology already contains within it the kernel of the answer to all these questions. Some engineers and others believe that technology cannot find solutions to the social displacements caused by technology. However, the reverse might be the case. The solution might come only through the technology.


Our argument is relatively straightforward. It revolves around the historical fact that man has tampered with natural evolution to a spectacular degree. Man has been so successful in his efforts to control his physical environment that he has usurped nature’s role in maintaining a kind of balance among all its parts. Humankind has altered the natural ecology and has started to organize things in its own way. Man, instead of being a subsidiary animal in the grand design, has become, for all apparent purposes, the driving element in the natural system. But the trouble is that man has not yet become such an accomplished systems engineer that he can master and maintain a more or less stable planetary ecology on his own terms. There are insistent signs that man, through his great engineering works and his technology, threatens to throw the naturally balanced system into a violent instability. The air and waters of the planet are being rapidly poisoned, many resources are being depleted, the available space is rapidly being occupied by man and his inventions, many unfortunate men are at wit’s end, and so on. Thus, there is a pressing need, not just for conservation, but for a new level of stability and control in this dangerous evolutionary trend.


On the other side of this issue is the fact of man’s own existence, and whether or not he has liberated himself through his efforts at control over his environment. There is the question of whether, in his increasing development of automatic machines, man has not also automated himself, and seriously reduced the potential variety and richness in his own life endowed to him through his biology, through the gifts of millennial evolution.


It is our assumption that all of the suspicions put forward in the foregoing paragraphs are manifestly true. Our massively successful technology, which was supposed to have provided our salvation, has brought us into deep trouble. We postulate that this technology must be modified in a dramatic fashion, that our machines must be provided with evolutionary powers, with some intelligence more like our own.

We shall have some fairly specific strategies to put forward that touch on many levels of the same problem. Moreover, our ideas are aimed at engineers, for they are as cognizant as anyone about how much machines now control our environment. Furthermore, through their excellent work in the past, and through the work they are doing at present, engineers have brought within reach the possibility of endowing machines with evolutionary skills, which should not only bring about an enhanced technological effectiveness but human enhancement as well.

However, despite their awareness of their machines and the nature of physical control, there is a serious question as to whether or not engineers have properly conceptualized the breadth of the effects that machines have had on human life. Machines could be set up and designed so as to teach their users how to use them more expertly, so as to enhance both the control and conceptualizing skills of their users, so as to satisfy the user’s own personal needs and his own personal style, rather than, as it is now, so as to reduce individuals to a stupefied norm. What we are after, in engineers, is a new respect for the capacity of our new control technology to serve the individual and his individual variations. A regard for individual variations, in the designing of new machines, is necessary for the evolutionary process.

The notion of evolutionary skill has many ramifications. Both man’s control and conceptualizing skills are more culturally determined than we ordinarily realize, and are not being used to their fullest potential. Furthermore, these skills, controlling and conceiving, are less separable from one another in an individual than the separate terms suggest. One begins to gain control as he conceives that the possibility of control exists. We are only now becoming aware of the need for personalized environmental management. This brings a new perspective: a man is constantly changing, either growing or decaying, just as his environment is constantly changing. Exactly how a man grows and changes, how he evolves in his powers of control and communication in relation to a changing environment, must be analyzed and described through some formal means. As yet, no such formal or scientific description exists, but it now appears possible, through the use of modern tools such as the computer, to begin to develop such a formal description, and to begin to explore man’s potential for the enrichment of his control and conceptualizing skills.

But we won’t know exactly how to approach this new description of man until we perceive how man has learned over the generations to solve real survival questions, often without awareness. Our methodology, our approach to the restoration of natural-like ecological controls, must grow out of data from evolutionary real-life situations. Such data must emerge from a “dialogue,” a kind of interfacing in depth, between man and his new machines.


With physical systems of the order of complexity of a man, or with large systems made up of many men, with systems as large as our human society, which are now composed of complex aggregates of men and machines, it appears no longer possible to analyze or simulate the behavior and systematic requirements through traditional modes. Their operations and functions, involving multifoliate nonlinear feedbacks and interactions, are far too rich for the usual descriptions we apply to physical systems. Units of measure for functional controls relevant to a particular purpose are fundamentally different from the units of measure ordinarily used for describing the actual construction of a system. With truly complex systems, one seeks out simplicities of behavior rather than simplicities of construction, because such systems have complex choice patterns with which to stabilize themselves in relation to dynamic environments. The problem is to find those measures that allow one to simplify the necessary control behaviors.

But up until now we have not sought a formal methodology for finding such measures. Thus, to begin the construction of evolutionary systems, it may be necessary for us to try them out, to build the physical systems so that they can “evolve” through real time in real-life situations. It is quite possible that through such evolutionary designs, new types of systemic “simplicities” will be discovered that ordinary analysis would not make evident, or that are not apparent in the complicated aggregates of smaller systems of the kind that engineers have been studying up till now.


The introduction of an evolutionary system into a real-user situation is colored by a difficult question that will affect any organization’s deliberate decision to move toward the incorporation of such systems into real-time operations in which the usual daily activities continue.

The question is how to justify the cost of an apparatus or procedure whose functions and virtues in terms of the purposes of the organization cannot be wholly defined in advance, but where it is a reasonable gamble that the “unexpected” will be profitable. For instance, many organizations have been using conventional computers, but they have no way of knowing whether or not they need or could use the more expensive on-line time-sharing systems now being evolved, since they have had no experience with such systems. The problem then for the person who believes in the real value of such a system is to get the potential users involved in it, to get them to grow with it as the machine-software combination is evolved to their purposes and style. If the users become involved in a prototype scheme of the system that is capable of being evolved in its usages, then the procedures of the humans change along with changes in the procedures of the machine. But the allocation of many of the costs in such an evolving system, in which the user and software procedures are undergoing “tuning” to one another, cannot be stipulated in advance. Despite the difficulties of incorporating evolutionary systems in real-life situations, it should be evident that this is the only way their true worth can be discovered. A prototype must have sufficient complexity to begin the evolutionary process and sufficient flexibility so as not to preclude unexpected possibilities or benefits. Much of the physical system can be specified in advance, of course, as can be the system software (the available programs), but the users will not know beforehand, in depth, all the things that they will be able to do with it.

Also, with very complex machines, if the machine does not help the user by evolving and enhancing his initial capacity to control it, he may simply reject it as being useless; and he may continue to use, at great cost, obsolescent and perhaps even dangerous machinery with which he is familiar.


From an engineering point of view, it is rational to ask at what point systems become so complex that traditional methods of attack become inadequate. It is said that the dividing line, where the capacity either to analyze or simulate a system breaks down, is somewhere between the complexity of a supersonic transport and a huge computer network. The flight dynamics of the SST can still be simulated, but when you go to an information network with many users, the simulation becomes meaningless. Somewhere between these orders of complexity, traditional methods will break down completely. Perhaps with telephone systems, certainly with large time-sharing computer systems, on out to large sociological units, you have passed a break point after which you must go to a new methodology, to an evolutionary method of attacking the system problems.

However, our interest and emphasis in evolutionary design, although it has something to do with the “practical problems” of gigantic systems, is not focused on such questions. Our interest in evolutionary machines is based on a concern for what has been happening to the human users of machines, what is now happening to them, and what is likely to happen. We see evolutionary machines of all kinds, large and small, as large as time-shared computer systems or as small as chairs, as a prerequisite for what we shall call “human enhancement.”

It is precisely this quality, built in advance into a system, of man and machine being able to evolve each other, that we consider vital to solving the problems of technical pollution discussed earlier.


In effect, we are saying that our present tradition of science and technology, the physical science built up so manifoldly of sequential cause, then effect, relationships, has brought us to a kind of dead end. Something radically new is wanted. More refinements of cause-and-effect, stimulus-response models, or more aggregates of such models in complicated systems, are not likely to lead to any real amelioration of the technological pollution.

For instance, highway engineers design and build big new highways to alleviate existing patterns of traffic congestion. They pinpoint the bottlenecks existing before the new highway and attempt to bypass them. But the construction of the new highways and bypasses, causing displacements and disruption to humans and animals through the leveling of trees, individual dwellings, farms—all this destruction and construction is barely complete before the new highway itself becomes obsolete. Change evolves change even if we blithely deny the need to research the process.

If, as Oliver Selfridge of the M.I.T. Lincoln Laboratory suggests, these problems cannot be left in the hands of traffic engineers alone, then who are the people with a broader grasp? The governors of the states? They too are hampered by legal codes and political structures that are also obsolete with respect to their capacity to respond appropriately to the massive social effects of technology. The tendency in the highly developed countries, such as the United States, is to look to the highest levels of the government for solutions to the problems manifested at apparently local levels. But even at the highest levels of government, there exists an uncertainty. We do not know where to allocate decision skills that can effectively increase our responsiveness to the social ills caused by technology. Something new is needed.


To get moving toward this “something new,” we must begin to shake ourselves out of the old. This is not easy. It is not even possible to gauge how deeply our classical concepts are rooted, until after we have adopted the evolutionary viewpoint that regards information as continuously being evolved from the unknown, metabolized into meaning, and finally recontexted into noise. Truths, while still true, become irrelevant. Man survives as a creature who continually changes and evolves, a creature who feeds on novelty, who reorganizes himself as he reorganizes his physical world and maintains stability by this process of change. It is not easy to adopt the evolutionary viewpoint, or to bring it to bear relevantly in engineering work. The old Greek way of simplifying the physical world into timeless true-false statements is what we have cut our conceptual teeth on. New information or insights we receive, any novelty we detect, we will automatically try to structure and fit into our present conceptual framework, so that we must suffer the frustrating effort of trying to “see” something outside the framework as though it existed within the framework. When we cannot make the fit, the world seems out of control and absurd, but it is our old Greek concepts that are absurd. We have informal ways of getting around these absurdities; but they are not codified for ready use or teaching.


Man’s irrepressible need to explain away or to fit new experiences into his existing conceptual framework often enough leads him into making comic connections, one of the most delightful of which is mentioned by Freud: “On one occasion during a sitting of the French Chamber a bomb thrown by an anarchist exploded in the Chamber itself and Dupuy subdued the consequent panic with the courageous words: 'La seance continue.' The visitors in the gallery were asked to give their impressions as witnesses of the outrage. Among them were two men from the provinces. One of these said that it was true that he had heard a detonation at the close of one of the speeches but had assumed that it was a parliamentary usage to fire a shot each time a speaker sat down. The second one, who had probably already heard several speeches, had come to the same conclusion, except that he supposed that a shot was only fired as a tribute to a particularly successful speech.”[6]

Somewhat less amusing, but revealing nonetheless, are the kinds of “in” jokes perpetrated by students of engineering and science, who find it funny to talk about the “real” world in terms of the equations and physical laws they are learning in their academic courses. The humor lies in the fact that “everyone knows” that these formulas are absurdly far from explaining the real world as they already know it from their experience. But give these engineering students a few more years of exposure to these technical formulations, and the constrained world within which they apply, and the jokes lose their luster. It is hard to maintain perspective.

The narrower perspective can be justified so long as we attack limited-scale physical problems: small devices and systems. But it limits the invention of those small changes in the multitude of small things that would allow us more freedom of behavior. Rheostat light switches, for example, allow us more freedom than simple on-off switches. But imagine home lighting controlled, as we suggest, by distributed artificial intelligence or computational power. Can you imagine lighting designed to help you see what you care about—and are beginning to care about? Every electrical outlet (distributed electric power) can be an input or output for computational power. This imminent availability of “intelligence” leads to radical new engineering design concepts. To use this new resource to the advantage of man, it will be necessary to move on from obsolete concepts—those that regard computers as “things” rather than as functional and personalized control distributing systems. To design effectively in this new mode, engineers must move to a consideration of what a man actually is, not what they would like him to be. To allow simplicity of operation while personalizing a whole system’s utility, engineers must discover man’s real style of learning.
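As a deliberately anachronistic aside for the modern reader (this article long predates any such code, and the class and parameter names below are our own invention for illustration), the lighting example can be sketched as the simplest possible “evolutionary” device: a lamp that treats each manual correction as evidence of its user’s real preference and gradually drifts toward it, rather than forcing the user to repeat the same adjustment forever.

```python
class AdaptiveDimmer:
    """Toy sketch of a lamp that learns its user's preferred brightness.

    Each manual adjustment nudges the remembered setting toward the value
    the user actually chose, so the device "evolves" to the user's style
    instead of reducing the user to a fixed norm.
    """

    def __init__(self, level: float = 0.5, rate: float = 0.2):
        self.level = level  # current brightness, 0.0 (off) to 1.0 (full)
        self.rate = rate    # how quickly the lamp adapts to corrections

    def user_adjusts(self, desired: float) -> float:
        # Exponential moving average: the correction is weighted evidence
        # of the user's preference, not a one-off command to be forgotten.
        self.level += self.rate * (desired - self.level)
        return self.level


lamp = AdaptiveDimmer()
for _ in range(20):
    lamp.user_adjusts(0.8)  # the user keeps turning the light up

# After repeated corrections the lamp has converged near the preference.
assert abs(lamp.level - 0.8) < 0.01
```

The point of the sketch is not the arithmetic but the design stance: the machine carries a model of its user that the user's own behavior continually revises.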


But what is still art and not yet science is how to renew discovery when a description has become irrelevant to our purposes. The cause-and-effect model does not tease out the small change that, when lifted to attention, unfolds a freshened perspective and a new control point.

Each way of structuring or modeling a phenomenon organizes the informational input to correspond to the model. Each epistemological structure, each pattern recognition system, is, in the most basic sense, a code that acts as a carrier making certain kinds of relationships translucent, but at the same time the structure excludes the entertainment of certain other kinds of relationships.

In the year 1948, a great step forward in modeling was made through the creation of the mathematical theory of information by Shannon and Weaver.[7] Their model created many new questions and answers. Their whole concept of information, the language and truths popular today among information engineers, was designed to help decide how to quantify information flow along a channel.

It was a structured model well designed to its purpose. But the range of possible messages and codes and the nature of the channel were all predetermined. The game was played like a game, with the deck of the cards and the possible moves known. Which move would be made remained unknown. Probabilities could be specified and novelty determined. The meaning or effect of the message on the receiver and the effect of his responses were not to be studied using the formal methods developed by Shannon and Weaver. In point of fact, feedback, complexity, and context were omitted from consideration. Limits were set that would generate a truth that might be used, for instance, to establish telegraph charges and such. However, it is significant that Weaver’s essays did suggest that the theory would grow into a wider framework important to understanding meaning and effectiveness.
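For the modern reader, the “quantifying information flow” that Shannon’s theory made possible can be shown in a few lines of Python (an illustration entirely foreign to the article’s era; the function name is ours). The entropy of a source’s symbol distribution gives the average number of bits per symbol that a channel must carry, which is exactly the kind of truth usable for setting telegraph charges, and exactly silent on meaning:

```python
import math
from collections import Counter

def entropy_bits(message: str) -> float:
    """Shannon entropy of the symbol distribution of a message, in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    # H = -sum over symbols of p * log2(p)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A source emitting two symbols with equal frequency carries 1 bit per symbol;
# a fully predictable source carries none. Meaning never enters the calculation:
print(entropy_bits("01" * 50))  # 1.0
print(entropy_bits("aaaa"))     # 0.0
```

Note that `entropy_bits("send money")` and an equally long string of gibberish with the same letter frequencies score identically, which is the omission of meaning and context the authors are pointing at.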

The consequence of this kind of model that held to the value structure of earlier science has been that the vastly improved communication machinery that has grown out of man’s tinkering with physical nature still only passively connects with its human user. The trouble is that we automatically expect to be organized by our machines. They are worth money. We gave up our humanness to the industrial revolution for the sake of “progress.” In the early stages of the evolution of machinery not much else was possible.

We need no longer live in the Chaplinesque world of “Modern Times,” in which our actions are driven and conditioned and shaped by the speed and size and character of the machine. With the coming generation of intelligent machines we should begin to be able to develop with them software designed to change the man-machine action that anticipates danger and that allows us to feel our way into a better mode of being with the machine. As the machine is given more of our kind of intelligence, it will also be able to help us to use its intelligence and power more efficiently. Its knobs, for instance, might turn more easily to a given man’s hand when the machine and man are producing a more optimal product.

There is as yet no science of how men and machines might learn and grow together. We believe it will be easier to develop this science than to measure man alone or in his interaction with another human, and that it must be carried forward in an evolutionary way, through the process of “dialogue.”[1]


Dialogue has to do with how people “track” one another in learning novel views, in undoing structural obsolescence (in both skills and concepts); it is a kind of tracking that may exist not only between man and man but between man and machine as well.

Dialogue is the contact of two or more people who do not engage in a sequential cause-and-effect discussion, in which one person speaks in turn while the other listens. It is rather an animated speaking at once, with whole body involvement—with hands, eyes, mouth, facial expressions—using many channels simultaneously, but rhythmized into a harmoniously simple exchange. And these exchanges hold their consistency by shifting the encoding they use in terms of the decoding system which they predict will put into context and give meaning and effectiveness to the messages sent. In the real world of moment-to-moment exchanges, the meaning of our words is governed by the total context in which they are uttered. We grasp meanings through a consideration, or an awareness, of the total context. McCulloch and Brodey make the contrast, for instance, that although, thanks to the work of Noam Chomsky, the analysis of context-free, phrase-structured language has been reduced to an exercise in group theory, no natural language is ever context-free, even when it is carefully written. In real dialogue, they say, “the context often carries most of the information. One has only to tape-record the dialogue to discover that a large fraction of the sentences are never finished, nor need be, for the meaning has already been transmitted.”[1]

There is, too, in this printed context, much that is implicit. However, as a dialogue, this exchange of information and concept is very seriously limited by the medium, even if you respond with letters and questions. It is a long-distance exchange, with terrific time delays between sender and receiver. Although it has certain advantages, it is a rather tenuous one-dimensional dialogue.

Let us then take the example of two people talking together. One is trying to explain something to the other. He starts by throwing out an explanation, constantly assessing by the other’s expression whether or not anything is getting across. The listener may be obviously puzzled or may show a glimmer of understanding. He is listening for something in the explanation that sounds like something he already knows, so that he can link it up conceptually with a structured inner map that he already has organized. By his expressions, by his gestures, by his questions, he reveals to his interlocutor or teacher what he does not understand. By his “errors,” by his lack of comprehension, by his verbal responses, the teacher judges how “far off” he is, and he takes new tacks, thinks of new analogies, and so on. Every one of us has had this experience of learning a new concept, a new idea, stretching out and changing a point of view we already hold; and every one of us has had the experience of trying to explain something to someone else. Intuitively, we know what this process is like, but we have no formal language to describe and to predict how it occurs. It is still an art.


The learning of bodily control skills is likewise still an art, and likewise a dialogue. A student learning to play tennis, for instance, listens to his teacher’s instructions, watches him perform movements with the racket, attempts similar movements himself, is corrected by the teacher who watches what he does wrong, and so on. The merest facial expression on the part of the instructor verifies to the student how far off he is. Finally, at some point (the instructor may “up” the power of his play against the student, forcing the student into a ballistic movement he had not anticipated), the student suddenly follows through a movement or a maneuver correctly, with an economy of effort, and simultaneously he acquires a feeling and a concept of why it is correct. He is on the verge of acquiring a new control skill; he has discovered how to “program” himself to carry out the control skill. We believe that machines can be taught to function as this tennis pro does, teaching by just the right action, well timed in terms of the learner’s readiness to acquire a previously unknown knack.

From these descriptions, which appeal to our common experience, we shall take off to make a number of points and speculations. First of all, the learning of the control skill, as in the tennis example, is but one program out of a tremendous potential repertory of such programs that an individual could learn. That is, the biological “hardware,” if you will, could be programmed in many different ways, and in ways which, in the beginning, might even seem unnatural to any individual. Though the styles of behavior we learn in our immediate families are rich in variety and though, in turn, the software and hardware programs that are indigenous to a specific culture are also rich in variety, yet we are already programmed to a relatively limited repertory of human possibilities, which we tend to carry out in automatic fashion. We become unaware that we have made choices when we originally learned what we later consider beyond decision. We know, for instance, that persons in the same family tend to walk alike, talk alike, and exhibit specific codes of facial expressions. The children growing up tend to learn the meaning of these physical expressive codes even before they learn the linguistic codes of the adults. And they become, in the old behavioral sense, conditioned. Some children who are not skillful at unlearning live to justify what they were taught when they were younger; they tend to carry out the same automatic programs. We all get into iterative behavioral and conceptual loops that are hard to break out of without outside assistance or interference. This is particularly sad when what is stunted is the development of nascent perceptual and motor skills that we have never learned to use because the previous generation had found no means to use them.
Alas, the rate of change of our environment, owing to the escalating success of technology, demands an ever-higher responsiveness on the part of those who attempt to manage the change as well as those who merely try to adapt to it as best they can. The slowing of children’s learning to an adult teacher’s polite pace is no longer advantageous. The manual workers, who acquired a fairly narrow repertory of skills, were the first ones to be threatened with obsolescence, but now even the clerical and conceptual workers are being overtaken by their technology. The refreshing creativeness of children must be allowed to reap its fruit in enriched variety of styles and interests and ways of knowing. The old kind of standardization has lost its utility.


What has all that to do with dialogue? Imagine, if you can or will, a machine that is as responsive to you as our postulated tennis teacher—a machine that tracks your behavior, that attempts to teach you a new control skill or a new conceptual skill and gives you cues as to what you are doing wrong. Furthermore, the machine gauges how far off your actions are from the program you are trying to learn, and “knows” the state of your perception; it is able to “drive” your perception gradually and sensitively, pushing you into unknown territory, into making you feel somewhat absurd and awkward just as you do when you are learning those new tennis movements. Suppose, in fact, this machine could sense factors about you that even a human instructor would miss—how your heart rate was changing its acceleration, how your temperature was rising or falling, how the acid production of your stomach was beginning to increase, or how your eyes were actually tracking during certain tasks. If the machine could use these “sensory” inputs in an intelligent fashion, it could be even more responsive to our needs and problems than the tennis instructor. In other words, this supposed machine would functionally be what we call a “gifted teacher.” This machine would be behaving, in fact, like a deeply perceptive wise man who can behave in such a manner as to drive us out of our resistances to learning new patterns of behavior. He would be “tracking” us in the complex of our physiological and mental behavior. And he would not only be tracking, but he would also be deftly pushing, rhythmizing his interventions to our “natural” time scale so as not to push us over into radical instability. This wise friend would not be reading out to us archaic laws, set in a language that is irrelevant to our needs and purposes (that would be just a smart friend).
He would be sensitively following our natural responses, building them by gentling their cadence just beyond the pace on which they evolved a moment before, and through this guidance, he would enhance what we could see and feel and do. What was mere noise or disorder or distraction before becomes pattern and sense, information has been metabolized out of noise, and obsolete patterns have been discarded. The man who helps us sense our wisdom we call wise.
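The feedback loop imagined here can be caricatured in a few lines of modern code. The Python sketch below is purely our illustration, written long after the article; every name and constant in it is an assumption, not the authors’. A tutor paces each new target just beyond the learner’s current skill, backing off when the error signal shows the push was too hard, and pressing on when the learner is ready.

```python
# Illustrative sketch (not from the original article): an adaptive tutor
# that rhythmizes its demands to the learner's pace, as the authors
# describe the tennis teacher doing.

class AdaptiveTutor:
    def __init__(self, step=0.2, comfort=0.5):
        self.step = step        # how far past current skill to push
        self.comfort = comfort  # error above this means we pushed too hard

    def next_target(self, skill, last_error):
        # Back off when the learner was overwhelmed; push on when ready.
        if last_error > self.comfort:
            self.step *= 0.5    # retreat to the learner's "natural" pace
        else:
            self.step *= 1.1    # drive gently into unknown territory
        return skill + self.step

def practice(tutor, skill=0.0, sessions=20):
    error = 0.0
    for _ in range(sessions):
        target = tutor.next_target(skill, error)
        error = max(0.0, target - skill)   # gap between demand and ability
        skill += 0.6 * error               # learner closes part of the gap
    return skill
```

Run over many sessions, the step size settles into an oscillation around the learner’s comfort threshold, so skill grows steadily without the loop ever driving the learner into “radical instability.”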


Granted, such a remarkable machine does not exist (except as a twinkle in the imaginative eye of a father). Not only that, but all men who manage machines must manage them in more or less the same way; all men are constrained to be “average” men vis-a-vis the machines.

Moreover, as our machine systems grow more complex, stretching their wires and tentacles throughout the fabric of our human society, the danger of their carrying us out of control becomes magnified. Regional power failures make us aware of our dependence on machines and, according to the news, of our joy at their embarrassment. The danger persists of machine-like decisions being made by the aggregates of existing machines, made through a modal logic which is not our logic. For instance, the Internal Revenue Service simplifies our affairs to meet its programmers’ problems. We require large systems with which we can engage in humanlike dialogue, of the rich kind that occurs between people. Our entire machine environment needs to be given a self-organizing capability that is similar to the self-organizing capability of men,[8] so that both kinds of systems can evolve and survive over the long run. Coexistence is better than the slavery to stupid machines that is accepted now.

But can sensitive capabilities be given to machines? Will it be possible to create a more intelligent and more responsive environment? Or are these merely fanciful and empty wishes? No. Work is already beginning, and we shall cite some examples in a subsequent article.


We should summarize in a little more technical fashion some of the characteristics of dialogue systems: (1) A dialogue has the capacity to draw its participants beyond the sum of their action or intent. It evolves them. (2) The dialogue occurs when the two or more systems (e.g., persons) begin playing each other’s transitional states simultaneously. They predict and hold a high level of what will be novel, given these predictions. Imagine yourself with a well-matched friend. You will also try to keep fresh and unexpected information building if you are close. You will drive the dialogue almost to the point where you are not sure that there is understanding until you test. Both parties push their individual codes just to the edge where there is just enough common coding to comprehend one another if their “prediction” is right. (3) Each participant uses less ambiguity when he perceives that such a reduction is needed (either because the other person is obviously not understanding or because there are environmental distractions—the time delay before such correcting is itself a code). (4) Error correction and an evolving purpose are used to control the conversation and allow the conversation to develop. (5) As the dialogue drives its participants, the self-regenerating power organizes its components even as the whole system changes, and some components waver on the limits of instability where the lack of prediction and the delicacy of balance allow what has been noise to become organized as a controller. Noise acts on the system when it is easily perturbed and the resultant shift reflects this effect, and what happens becomes information (thus, for instance, when a person is irritated or abnormally disturbed, he does things that do not follow his “normal pattern,” and gives the other person an insight into his underlying operating codes). The system must be time-phased. 
It adapts to environmental change in shorter and longer intervals, the variance in inertia preventing fragmentation. (6) Automatic error correction allows the system to remain within required limits for smoothly evolving, giving dialogue a purpose. The dialogue of seminar learning has a different purpose than, for example, lecture teaching. The power of dialogue is commonly used to create data out of noise, to create information out of what was so unknown (and perhaps unsuspected) as to be beyond that which was perceived. It is used to give fresh conceptual hooks without which data would be so meaningless as to be beyond perception. During dialogue, a pattern emerges from what was meaningless and random. This is what real learning and unlearning (destructuring the obsolescent concepts) is about.[9] Thus, (7) in dialogue, the changing in entrainment of many levels of synchrony and isomorphism allows significance to grow out of the slightest variations that happen at a control point—a point where a small change makes a large difference in the way the total organization goes. In dialogue, there is continuous identification of those points where slight change will induce significant new recognition of pattern. That is why the amount of information that can be exchanged is of a higher order than in nondialogue systems—a considerably higher order.
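Three of the numbered points above can be caricatured in a toy simulation: pushing the shared code to the edge of comprehension (point 2), reducing ambiguity when the other party fails to understand (point 3), and automatic error correction keeping the exchange within limits (point 6). The Python sketch below is our modern illustration, not a formalism from the article; all names, probabilities, and constants are assumptions.

```python
# Illustrative toy (not the authors' model): a speaker tunes the
# ambiguity of his messages against a listener's success at decoding,
# pushing toward the edge on success and falling back to redundancy
# on failure.

import random

def dialogue(rounds=50, seed=0):
    rng = random.Random(seed)   # deterministic run, for illustration
    ambiguity = 0.9             # compressed, novelty-rich coding
    understood = 0
    for _ in range(rounds):
        # chance of decoding falls as the message grows more ambiguous
        ok = rng.random() < (1.0 - 0.6 * ambiguity)
        if ok:
            understood += 1
            # success: push the shared code closer to the edge (point 2)
            ambiguity = min(0.95, ambiguity + 0.02)
        else:
            # failure: fall back to more redundant signalling (points 3, 6)
            ambiguity *= 0.8
    return understood, ambiguity
```

Left to run, the loop regulates itself around the edge of mutual comprehension: each success licenses a more compressed message, each failure forces a more redundant one, which is the self-correcting balance the numbered points describe.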

In sum, it is during dialogue that two systems (whether man-and-man or man-and-machine) achieve the most delicate matching of stages, communicating optimally for the purpose of unlearning conceptual and control obsolescence.

As McCulloch and Brodey phrase it, “dialogue is not a simple alternation of active speaking and passive listening turn by turn. Both partners are continuously observing and sending many cues. It is a closed loop of many anastomotic branches through which there runs at a split-second pace an ever-changing symphony and pageant relating man to man ever more richly.”[1]


Thus, if men are to use machines for learning, they must see that these machines incorporate the capabilities of evolutionary dialogue in order to enhance the possibilities of enriched information exchange. It is even conceivable that in dialogue with machines, man may discover prejudgments and preconceptions that are so omnipresent with men as to render them utterly automatic. If this is indeed the case, the way could be opened to modeling and discovering the deepest laws of man’s learning behavior, thus also opening the door to making teaching a science rather than an art presently enjoyed only by the gifted few. If education’s purpose is indeed human enhancement, then such man-machine education would be human enhancement par excellence. Such heightened teaching would also enhance the human’s capacity to teach other humans directly.


Man has always yearned for heightened perceptions and insights, for the truth about himself and his world, and for deeper communion with his fellow men. The drive for man to model or map in his own mind the nature of life and of the physical world is virtually automatic, and seems related to man’s survival. The drive for such knowledge is at the heart of science. In some periods, men have sought heightened perceptions through starving themselves, through living alone in desert wastes, through self-tortures of all kinds, through good foods, through love, through vigorous athletics. There have always been, so far as we know, natural drugs and alcoholic beverages, and today there are a great variety of these, of which LSD is probably the most spectacularly publicized. Each epoch has practiced its own rituals and utilized its available media. Now, you might say, we are proposing to employ machines for similar purposes.

But the reader is misunderstanding us if he thinks this is all we mean. We are not urging merely a new kind of calisthenics, although it is not hard to imagine that intelligent evolutionary devices would be used for such purposes (especially when such devices become cheap enough and easily available). What we are urging is that engineers become aware of the new tools of artificial intelligence that are now falling into their hands. Machine intelligence—logic boxes, if you will—could give machines a capacity to interact with the human at a level of detail that isn’t restricted to a simplistic game. We are urging them to set themselves up to explore the evolutionary capabilities of man and to investigate the various aspects of the phenomenon of dialogue. We are saying that the situation now vis-a-vis intelligent machines is analogous to the situation of man at the beginning of the industrial revolution. At that time, men must generally have held the concept (rapidly becoming obsolete) that work was for the muscles of men and for animals. But along came the engineers finding ways of distributing muscle work among machines. At first, the machines were expensive, and men had to be brought together in work pools, in factories, near the machines and their sources of power. Then engineers found ways of distributing energy more simply and economically, so that now wherever you have electric plugs, you have muscle power to run dishwashers, air conditioners, etc.

In this new epoch, which Wiener called the Second Industrial Revolution, we are beginning to see the evolution of distributed intelligence, and men may begin to discover ways in which certain tasks of intellect and control, which we have long considered innate to man and as part of his privileged domain, may be delegated more economically and more satisfactorily to his environment. But to do this, the engineer must throw over old habits of thought, which are certainly relevant to the purely physical environment, and he must discover how to conceptualize man. He must learn the laws for observing in the situation where observer and observed are of the same species and influenced by each other’s acts even as they occur. These laws of operation are manifestly different from those of physical nature. The engineer, we believe, must go about discovering the evolutionary character of man through essentially evolutionary processes. He cannot start out measuring and specifying man with set physical parameters brought over by main force from the world of physics, from mainly cause-and-effect models. For this purpose, man must be measured as an evolutionary creature. The new tools of artificial intelligence make it possible to synthesize and model evolutionary processes in man, because these new tools can also be given evolutionary powers and can enter into dialogue. Nor are we talking here about some form of “average” evolutionary process. Plainly, some men are geniuses, with mighty capabilities of conceptualizing, and other men are dolts, who nonetheless yearn for satisfactions they should not be denied. And some men may seem like dolts, but may well harbor perceptual powers and views that they have been unable to express or formalize within the available means and that society has not learned to appreciate or tap for its benefit.
Through new intelligent media and tools, such men might well “come to life.” But we will never know for sure until we have tested and tried the limitations and the possibilities of the new media.


Thus, the new evolutionary tools, in their “nature,” should be shaped with a “requisite flexibility and variety” to satisfy individual users. Certainly, in the beginning (now!), the efforts at bringing evolutionary powers to our machines, and the enhancement of human capabilities, must be modest, but the evolutionary process itself is bound to proliferate into steadily deepening possibilities.

Not least of all, we must consider the incalculable benefits that could be brought to the young, the next generation. In point of fact, we should remember that engineers today are largely designing the environment for the next generation. The new generation, the young kids, who are open and alive and curious and experimental, who are learning the new science, who are learning new concepts, won’t, through the new evolutionary tools, be restricted by the relatively simple formal means of our generation (e.g., the workbooks with blanks that the child or man must service). The simple linear and Aristotelian conceptualization that has governed the learning process up to the present has, on the whole, been more stultifying than enlivening. It shut out, rather than permitted, the metabolism of novelty on which the human spirit feeds. Nor did this older formal means allow the control finesse necessary to drive a student safely beyond his conditioned fears, to disorganize his conventions of what is humanly possible, to drive him just far enough into ambiguity, confusion, and absurdity where he could reorganize his mental patterns in accord with a deeper reality. Such evolutionary tools could make better scientists of the young, or better doctors, or better psychologists, or whatever. Young children are the world’s “natural” scientists. Through new media of modeling and conceptualizing, their whole conceptual training could evolve faster and more richly, their curiosities and capabilities could be enhanced rather than quashed by the machinery of education. Education would be made relevant to them and their personal lives; it would be more than just something out of a book. Again, we won’t know the possibilities in this direction until we have tried.

The possibilities for entertainment devices growing out of intelligent machines are enormous. We won’t even bother trying to specify what such devices might be like. Suffice it to say that any device can be treated as a toy; we are safe in assuming, we believe, that there will always be entrepreneur types who will find novel ways of exploiting such devices. Not that we have anything against toys; we ask only that they be lively enough to help us enjoy our own aliveness.


The reader who has come this far with us must sense the open-ended, rather “soft,” unfinished character of the ideas we have put forward. Perhaps, he might say, it is far too early to attempt to crystallize ideas that are still unfolding. But—and this is a matter of judgment—we believe that the accelerating effects of our technological pollution give us very little “lead time” in bringing these effects under human control. We do not think of evolutionary technology as utopian, but necessary; and we think the time for engineers to join in the necessary dialogue is now. The decisions about the deployment of government resources to answer the problems of technological pollution are being made now, and these decisions could have positive effects on the life we enjoy in the future, or they could lead to waste and irrelevancy in that life.

There will be those who will object that the computer construction art, and the science of artificial intelligence, is too little advanced to undertake the kinds of evolutionary tasks we have talked about. But we must be careful not to misjudge the breathtaking swiftness with which the computer art is exploding within our social organization. The scientists of artificial intelligence—Minsky, McCarthy, Simon, Newell, Samuel, Papert, and many others—are busy evolving their machines; the cost of on-line computational capacities is dropping at a remarkable rate; and time-shared computer systems, regarded as the necessary take-off stage for widespread on-line intelligence, have been pushed hard in the past few years by Licklider, Corbato, Fano, Shaw, Selfridge, and many others.

Our computers are still young, and despite all the bluster about their powers, are still more like insects than animals—hard-shelled, quick, busy, rigidly constrained in their maneuvers, persistent and exacting in their repetitive tasks, and rapidly multiplying. If we manage them in the way we have managed our earlier machines, and give them anarchic powers within the human community, we too shall behave in a more insectlike way.

But our computers are growing in influence, as well as intelligence, so there should be support for evolving their “sensitivities” in using humanlike intelligence.

Properly managed, these new computational powers could bring a new beauty and true functionalism to engineering, could mediate between us and the harsh automating effects of our present technology, could bring new satisfactions to the human users of technology, and could perhaps stabilize the rapid change of our environment. Although the work has begun, it needs the momentum of the whole community of engineers. The lead time is short.

A subsequent article will discuss practical examples of evolutionary design that are now under way or being contemplated; it will aim at concretizing the questions that have been treated here in a philosophical vein.

A word about this coauthorship. The ideas and the philosophical outlook are Brodey’s. In order to elucidate the evolutionary idea, we have engaged in the kind of dialogue described in the article. Original photos are by courtesy of George DeVincent.


Arbib, M. A., Brain Machines and Mathematics. New York: McGraw-Hill, 1964.

Brodey, W. M., “Developmental learning and education of the child born blind,” Etc., vol. 23, pp. 293-306, Sept. 1965.

Brodey, W. M., “The clock manifesto,” Ann. New York Acad. Sci., vol. 138, pp. 895-899, 1967.

Fogel, L. J., Owens, A. J., and Walsh, M. J., Artificial Intelligence Through Simulated Evolution. New York: Wiley, 1966.

Lindgren, N., “Human factors in engineering,” IEEE Spectrum, vol. 3, pt. I, pp. 132-139, Mar. 1966; pt. II, pp. 62-72, Apr. 1966.

MacKay, D. M., “On comparing the brain with machines,” Am. Sci., vol. 42, pp. 261-268, Apr. 1954.

MacKay, D. M., “Self-organization in the time domain,” in Self-Organizing Systems—1962, ed. by M. C. Yovits. Washington, D. C.: Spartan Books, 1962, pp. 37-48.

McCulloch, W. S., “Commentary,” in Communication: Theory and Research, ed. by L. Thayer. Springfield, Ill.: C. C. Thomas, 1967.

McCulloch, W. S., Embodiments of Mind. Cambridge, Mass.: M.I.T. Press, 1965.

Minsky, M., “Artificial intelligence,” Sci. Am., vol. 215, Sept. 1966.

Rosenblueth, A., Wiener, N., and Bigelow, J., “Behavior, Purpose and Teleology,” Phil. Sci., vol. 10, pp. 18-24, 1943.


McCulloch, W. S., and Brodey, W. M., “The biological sciences,” from The Great Ideas Today 1966. Chicago: Encyclopaedia Britannica, 1966.

Von Domarus, E., “The logical structure of mind,” in Communication: Theory and Research, ed. by L. Thayer. Springfield, Ill.: C. C. Thomas, 1967.

Wiener, N., The Human Use of Human Beings. Boston: Houghton Mifflin, 1950. Reprinted as Doubleday Anchor Book, 1954.

Johnson, H. W., “The university of the future,” Inaugural address as 12th President of M.I.T., Oct. 7, 1966.

McNamara, R. S., Address before Millsaps College Convocation, Jackson, Miss., Feb. 24, 1967.

Freud, S., The Interpretation of Dreams, trans. by James Strachey. New York: Basic Books, 1955, p. 500.

Shannon, C. E., and Weaver, W., The Mathematical Theory of Communication. Urbana, Ill.: Univ. of Ill. Press, 1949; publ. earlier in Bell System Tech. J., vol. 27, pp. 379 and 623, 1948.

Pask, G., “My prediction for 1984,” Prospect, Hutchinson of London, 1962.

Brodey, W. M., “Unlearning the obsolescent,” IEEE Systems Sci. and Cybernetics Conf., Washington, D.C., Oct. 17-18, 1966.