Is This Machine Intelligent?


Johnson’s paper — cowritten with John Kotelly — updates the well-known Turing Test by claiming that machines would be intelligent only once they can understand jokes, i.e., recognize unexpected shifts in context.


Many attempts to define intelligence ignore its fundamental nature as a quality of externalized behavior, and therefore as independent of the internal structure of the organism or system producing it. The necessary properties of intelligent behavior are discussed and a form of test is proposed whereby a man, presented with an allegedly intelligent machine, could convince himself of the truth or falsehood of the allegation. Interaction with an intelligent examiner is found to be necessary to determine the process of contextual mapping performed by the machine, and the test determines, as a primary feature of intelligence, whether the machine is capable of the grasp of meaning under a shift of context.


In the course of seeking a definition for intelligence we wish to avoid the polemics which ask whether man is a machine or whether machines can be made to “think”, but rather we will consider intelligence to be a quality of behavior. That is, whether the system we are to test for intelligence is of biological or mechanistic origin is of no consequence to us since the quality for which we are testing will be observable only at the interface between the system and the environment: between the organism and its world. So far as our investigation is concerned, the internal workings of the system under study are irrelevant to the definition and questions regarding them shall be off-limits until the definition is complete.

As a working frame of reference, then, we shall imagine that someone has brought into our office a cabinet ostensibly containing complex hardware, has demonstrated to us the means of communicating with it, and has departed after claiming that we are standing before an intelligent machine. How do we go about convincing ourselves of the truth or falsehood of that claim? Note that we are not trying to determine whether what resides inside the box is a man, a monkey, or a computer, but rather whether the unit as presented is capable of something we would call intelligent behavior. It is time we came to grips with what we mean by such a question.

Let us begin by considering some of the qualities of behavior which are not acceptable as descriptors of intelligence. Firstly, specialized and amazing talents which the system may possess, such as a seemingly bottomless memory for facts with instant recall or the ability to sum vast columns of figures, may be gratifyingly useful, but it would be a mistake to call such talents indicative of intelligence, not only because unintelligent machines may be made to imitate these behaviors but also because the designation would be redundant. Even the ability to name objects or identify patterns in a manner which has been “learned” may be mechanized to a considerable degree; but here we must be careful to define the procedure adequately. More will be said of this later.

Secondly, we will be unable to determine whether the machine is intelligent unless we interact with it. That is, no amount of passive observation on our part of the machine’s behavior in a passive environment will suffice as a test. It will be seen, in fact, that we may only make a determination by observing the interaction of the machine and another system or organism (ourselves?) which we know to be intelligent.

Thirdly, it will be found that the interaction in which we must engage with the machine will require communication with it in at least two modalities. The necessity for this multi-modal interaction is not immediately obvious but should become apparent later. The question might arise, for example, whether a rigorous test might be managed for the case of communication with the machine solely by way of a telephone connection. The answer involves consideration of the state of sophistication of the machine when it is presented to us: that is, the model of our world which it already contains.

If it is capable of intelligent behavior but is as yet inexperienced in our world, then it must gain that experience through interchange with us. It will be seen below that no single modality can afford the machine an opportunity to build up a repertoire of metaphoric identifications which will be necessary for its demonstration of intelligence.

What can we do? What shall be the form of interview which will give us a unique answer; an interview invulnerable to any attempt at fraud?

We can ask questions, but those requesting facts unrelated to our immediate experience with the machine will test no more than what has previously been stored within its memory and the accuracy of access to it. We can teach it the rules of games — or ask it whether it knows them already — and proceed to play, but the machine’s success against us will only be a measure of our ability to formulate the rules in machine language and of its facility for following them. Perhaps it will emerge that intelligence is more in evidence when the machine is able to decide, in an on-going situation, whether or not a game is in progress or whether some other form of interactive behavior is more appropriate. Let us hold off on that point just for a moment.

Since our interaction with the machine will be necessary in some form in order for our observations to be active rather than passive, it is apparent that what we want to do is to require the machine to learn something from our instruction and to respond intelligently to our examination of that learning. But let us not imagine that the establishment of a conditioned stimulus, or some other combination of input-output, cause-effect, stimulus-response pairing, is an adequate indicator; intelligence must denote something more than that if it is to be meaningful. It must, in fact, denote a grasp of meaning, and a response resulting from that grasp.

The meaning of a word or an act or a symbol is always embedded within a metaphor, and that metaphor is determined by a context within a world. We may assume that the machine and we exist in a common world and so exclude considerations of the world as a variable; but let the form of a metaphor vary or a context take a sudden shift, and any meanings which had previously been agreed upon are lost unless the new metaphor or context is within the grasp of the machine. We must ask ourselves in what manner meanings arise for us: how we come to “know” them, and how we implement that knowledge in our dealings with the external world. Only when we understand that process may we seek to require a similar externalization of intelligence from a machine.

Of what are the meanings of things or acts composed? The question appears difficult to answer but at the same time a valid answer is deceptively simple. Meaning is determined by the context within which an object or its symbol is found, and the observer’s grasp of it depends upon his present or prior participation in that context, in a real or symbolic manner. Man, with his verbal faculties, is capable of vicarious participation and hence of symbolic grasp of meaning without direct involvement, but his grasp will be in terms ultimately of contexts with which he has been directly involved.

“Context is an operator which selects out the entailment(s) or meaning(s)... from the set of all possible entailments (or meanings) existent on some given model.”[1] Context places both you and the object “in the picture”; if you have had no prior experience with an object or its surround, then neither can determine a context for the other and both are “meaningless” for you. A familiar object may not even be recognized if the context in which it is viewed is sufficiently inappropriate — but once it has been recognized, the search for meaning in the new pairing commences a fortiori. Subjective examples will be waived.
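The quoted definition lends itself to a toy sketch. The following Python fragment is only an illustration of the idea of context as a selecting operator; the symbols, contexts, and the tiny "model" are hypothetical and appear nowhere in the paper.

```python
# Toy illustration (hypothetical data): context as an operator that
# selects one entailment out of the set of all entailments a symbol
# admits on a given model.

# The "model": every meaning a symbol could carry, keyed by context.
ENTAILMENTS = {
    "bank": {"finance": "institution holding deposits",
             "river": "sloping ground beside water"},
    "pitch": {"music": "perceived frequency of a tone",
              "sales": "persuasive presentation"},
}

def apply_context(symbol, context):
    """Select the entailment of `symbol` under `context`.

    Returns None when the observer has had no prior participation in
    the pairing -- the symbol is "meaningless" in that context.
    """
    return ENTAILMENTS.get(symbol, {}).get(context)

print(apply_context("bank", "river"))   # sloping ground beside water
print(apply_context("bank", "music"))   # None: no shared context
```

A sudden shift of context, as in a joke's punch line, amounts to re-applying the operator with a new second argument and discovering that the selected meanings have changed.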

If, then, we are seeking to examine the claim that a machine is capable of intelligent behavior, we must be prepared at least to test the responses which indicate its grasp of a change of meaning brought about by a shift of context. First, however, we must establish a common contextual milieu and for this purpose an interactive exchange of communications may be necessary. During the exchange it will be our purpose to discover in what manner the system organizes itself with respect to its world: how, that is, it undertakes to investigate the relation of the components of its environment to itself.

What we shall in fact expect of the machine is that it set up within and for itself a model of its environment which is descriptive of intensional relations. Furthermore, we shall not be allowed to ask to see the model itself; rather we shall have an opportunity only to test, through interactive dialogue, its subsequent translation of those intensional relations into appropriate responses. In our use of computers we are accustomed to considering them as unexcelled in searching extensional parameter spaces: looking up the names of all objects having a certain property which we can define; and so they are. However, we are also accustomed to having to perform the task of comprehensive intensional definition for the computer: selecting and defining, for each object with which it is to deal, all those qualities which we consider pertinent. Of an intelligent machine we will demand that the exploration of intensional parameter spaces be its responsibility rather than ours. Therein lies a vast difference.
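The extensional/intensional distinction drawn above can be made concrete with a small hypothetical sketch. The objects and qualities below are invented for illustration; the point is where the authorship of the catalogue lies.

```python
# Hypothetical sketch of the distinction: the extensional query is easy
# to mechanize, but the intensional catalogue feeding it had to be
# authored, quality by quality, by the programmer.

# Intensional definitions, supplied BY US for each object in advance.
QUALITIES = {
    "sparrow": {"animate", "winged", "small"},
    "glider":  {"winged", "silent"},
    "pebble":  {"small", "silent"},
}

def extensional_search(prop):
    """Look up the names of all objects having a given property --
    the kind of search at which computers are unexcelled."""
    return sorted(name for name, qs in QUALITIES.items() if prop in qs)

print(extensional_search("winged"))  # ['glider', 'sparrow']
# The paper's demand of an intelligent machine is the converse burden:
# populating QUALITIES itself, out of its own commerce with the world,
# rather than receiving the table ready-made.
```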

Intelligence organizes information, and for this purpose it relies upon a modelling system wherein relations of things and of other relata may be juxtaposed and associated. It must not demand that its memory be infinite nor that access to all points be instantaneous, but rather that the modelling system serve as some reliable strategy by which appropriate use may be made of context in the establishment of metaphoric identifications. A metaphor is an implied comparison, a non-determinate pairing of things whose meaningful effect, each upon the other, is dependent upon the context framing them. This non-determinism of metaphor, which might at first appear to operate to the detriment of communication, is in fact what makes our world much richer for us. Immediate experience in the world, sensed as a pairing of perceptual processes and stored as pairings in a relational modelling system, is again brought forward during intelligent interchange when the ambiguities of metaphoric communication make vicarious experience as-if-our-own. The foregoing tells us something else: the modelling system must be self-referent, and must therefore contain a model of itself also.
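One very crude reading of such a relational modelling system can be sketched in code. Everything here is a hypothetical toy, not the authors' construction: experience is stored as pairings under a context, and a metaphoric identification is proposed wherever two things have been paired with a common third under a shared context.

```python
from collections import defaultdict

class RelationalModel:
    """Toy relational modelling system: stores experiences as pairings
    and surfaces candidate metaphors via shared relational structure."""

    def __init__(self):
        # thing -> set of (partner, context) pairings
        self.pairings = defaultdict(set)

    def record(self, a, b, context):
        """Store an immediate experience as a pairing under a context."""
        self.pairings[a].add((b, context))
        self.pairings[b].add((a, context))

    def metaphors(self, a, b):
        """Non-determinate comparison: the contexts under which `a` and
        `b` have each been paired with some common partner."""
        shared = self.pairings[a] & self.pairings[b]
        return {ctx for _, ctx in shared}

m = RelationalModel()
m.record("heart", "pump", "circulation")
m.record("city", "pump", "circulation")    # traffic flows like blood
print(m.metaphors("heart", "city"))        # {'circulation'}

# Self-reference, crudely: nothing prevents the model from recording
# pairings in which it is itself one of the relata.
m.record("model", "mirror", "reflection")
```

The non-determinism the paper values shows up here as the fact that `metaphors` returns a set of candidate framings rather than a single answer.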

Let us bring the foregoing discussion more to life by way of a parable. A well-defined set of formalisms would be more useful in the long run but not so likely to convey immediately the purpose of this paper. Such a formal treatment may be found in Hermann and Kotelly[1]; this present paper should be considered an example of an application of that one.

We shall therefore waive formulation of an algorism whereby someone else’s machine may be tested sufficiently. Let us suppose that you are to be placed within the box, provided with whatever communications to the outside are allowed, and that you are then to convince us that you are an intelligent machine. If the simile is apt, that of having been committed unwillingly to a mental institution whereupon the onus of proving your sanity is upon you, so be it! Where would you begin?

If we elect to remain silent and immobile, you are without recourse to any demonstration of intelligence, nor could you even be certain of our attentive presence. The situation would differ little from that of a tape playback in an empty room. What you want is something from us to which you can respond, something that will allow you to indicate that you have taken into the box more than a set of mechanical aids. The question arises: what can we allow you to take into the box with you? The question is central to our considerations.

If you are an intelligent, sentient creature who has been substituted for a machine trying to prove its intelligence from inside a box, and you have been provided access to the world only by those channels which would have been available to the machine, can we allow you to take with you knowledge of past experience which you have had the opportunity to acquire in ways common to us but not ordinarily common to machines? The answer is yes, but with a caveat. To deny to either a programmer or a designer of machines the opportunity to apply the utmost of his art would be unfair, for we must not prejudge the intelligence of what is inside the box. The limits on our testing will take shape at the interface we share with you: in our use with you of the available channels of communication. The caveat that obtains must set us to wondering how experience with the world might be programmed into a machine which did not have at its disposal the multi-sensory pathways by which it could have had the experiences itself. To put it another way: if you take your experiences with you, of what use are they if all you have as interface is a typewriter console or a telephone handset? Your intensional images, whether eidetic, haptic, kinaesthetic, or verbally vicarious, will have little value to you unless you are afforded the chance to exercise them in the new contexts that we will set for you.

Supposing then that our path of communication is a verbal one, we might start by telling you a number of stories — some humorous and some not — and observe your affective reactions. This might also constitute part of a reasonable test for sanity. The humor of a joke’s punch line is lodged in the way it suddenly shifts the context of what has gone before and causes us to reconstruct the story for ourselves in a new frame of reference. Depending upon your degree of sophistication you may laugh immediately, indicating that throughout the telling of the joke you were at least subliminally aware of the possible ambiguities of context, or more belatedly if your reconstruction is belabored. Someone may raise here an objection that sophistication and the appreciation of humor are culturally determined, but that objection is no objection at all! Intelligence is in fact an assimilator of culture. If you were not formerly familiar with our contextual references, then part of your burden of proof will be to apply your powers of intensional grasp to gain such familiarity. For that purpose you will need to ask a lot of questions, but we will answer them patiently.

One might assume that some forms of humor exist whose contexts are culture-free, and so we look for samples among those that children understand and enjoy. We will find that the contexts that determine their affective states always refer to multi-sensory experience: a story told which builds a structure in one sensory mode, followed by a punch line shifting the context to another mode. The “cultural” dependence is the simplest possible: that of one’s own (non-vicarious) experience of the recent past: experiences in which a pairing of at least two sensory events occurs so that metaphors are subsequently possible.

Further examples could be detailed for game playing, the use of tools, or the recognition and appreciation of art forms, but the general framework of the test for intelligent response would be the same: the making of metaphoric identifications and the grasping of the change of meaning due to a shift in context are essentially complementary talents. Both achieve the same result, but the process is not the same. Each variation in form of activity calls for a different pattern of sensory interaction; some will assume extensive, detailed past experience, some will require only the instant pairings of experience and dialogue that are serving to build up a set of mutual contexts. For any given machine with which we are presented, an appropriate milieu for examination must be chosen so that the full range and repertoire of its potential dialogue may be exercised. We shall not depend for our verdict upon metaphors or contexts with which the machine has not been or cannot become familiar.

We have not set forth here a ritual algorism for the sufficient cross-examination of any allegedly intelligent machine, for, had we attempted such a script, it would necessarily have been self-defeating. Nor can we say definitively that only certain structural forms are capable of achieving intelligent behavior. No estimate of the qualities of externalized behavior can legitimately be based upon a foreknowledge of the structures or operational subroutines producing them. At the same time, however, it would seem a poor risk to attempt the setting up of intensional relations with context-free languages, though we do not imply that contextual operators may not be invented for them.

At the risk of provoking the ire of the more symbolically inclined, we have not laced this paper with the development of formalisms[2] but have attempted instead to involve our audience more directly in the processes discussed. We hope that the understanding of either view will be aided by the other.


[1] Howard T. Hermann and John Christopher Kotelly, “An Approach to Formal Psychiatry,” Perspectives in Biology and Medicine, Vol. 10, No. 2, Winter 1967.

[2] Warren S. McCulloch, “Lekton,” Part XIV of Communication Theory and Research, Lee Thayer, Ed., Charles C. Thomas, 1967.