A very dense article that Brodey co-wrote with Warren McCulloch. Most of the theoretical scaffolding in this piece is McCulloch’s, but Brodey’s contribution comes through as well, particularly towards the end as the article discusses the importance of context and dialogue.
For this year's review of the biological sciences we have asked Dr. Warren S. McCulloch to survey recent developments in neurology and particularly the modeling of the brain and central nervous system as an electronic, chemical, and mechanical system.
Dr. McCulloch is a neurophysiologist, a psychiatrist, a philosopher, and a poet; he is one of the pioneers of cybernetics; yet he prefers to think of himself as “an experimental epistemologist.” He was born in 1898 in Orange, New Jersey, a descendant of the man immortalized in the Supreme Court decision of McCulloch versus Maryland in 1819. He was educated at Yale University and Columbia University, from which he received his degree in Medicine in 1927. After laboratory work in neurology at Bellevue Hospital, he worked in mathematical physics at New York University and then took psychiatric training at Rockland State Hospital. From 1941 until 1952 he was Professor of Psychiatry at the University of Illinois. In 1952 he became head of the neurophysiology group in the Research Laboratory of Electronics at Massachusetts Institute of Technology. He is the author of more than 170 scientific papers, a selection of which was published in 1965 under the title Embodiments of Mind.
Dr. Warren M. Brodey has been a collaborator with Dr. McCulloch at the Research Laboratory of Electronics since 1964. A Canadian by birth, he received his medical degree in 1947 from the University of Toronto. After practicing child psychiatry in Boston, he studied intra-family communication as a research psychiatrist at the National Institute of Mental Health at Bethesda, Maryland.
Today no one can write a synopsis of all that is new in biology. Descriptively its data are too vast, its disciplines too dissimilar, and its growth too rapid for our comprehension. Each must see it from his own angle. Both of us are psychiatrists. What we can tell you will unfold from our viewpoint. Our business is to understand the biology of people. We must, of course, see each person as an active physical structure. But that is not enough. We must see man as a system handling information in order to survive and in order to enjoy the most intimate forms of communication which Donald MacKay correctly calls “dialogue.” This enables man to learn as is necessary for the survival of the species. Thus our biology (like its counterpart in engineering) distinguishes work from energy and signal from noise.
Man’s attempt to learn to know himself and his place in the universe is older than all written records. His powers of perception and expression were then superb. Witness his cave paintings. His sustained quantitative observation and construction produced Stonehenge. He had learned to kindle fire and to make wheels. Were they, our ancestors, alive today and had they from birth the environment we share, they would probably be as good scientists as we are. What has made the difference between man then and now is focused for Western man in Greece and is a matter of record. It sprang from the Greek conception of lawfulness pervading all nature. In this society geometry and logic flourished. Aristotle, like Darwin, made rich observations, and biology received not only the classification of genus and species but also the principles that underlie “bound cause”—that like begets like—the heart-centered and then the nerve-centered theory of knowledge, and the foreshadowing of two other laws which were not only the foundation of their city-states but also of their biology: one, called the “equality of unequals,” that no matter how dissimilar we may be we are all alike before the law, for health requires the harmonious team play of the many dissimilar parts severally necessary in living systems; the second, “general because best,” that the idea or form among many which is to be widely accepted has to be the one most likely to succeed. The general is to be chosen who comes first to the best opinion. Finally, they had clearly separated living things from all others, by this: that the living have their own ends, hence our notion of function as the end in and of an operation. These were the foundations of their great school of medicine. By A.D. 200 they had an excellent gross anatomy of the brain. Then came the Roman conquest followed by other-worldliness, and biology lay dormant for a thousand years.
Modern biology begins with modern physics. Leonardo da Vinci, picking up where Archimedes left off, generalized the theory of the lever and understood properly the action of muscle and tendon at a joint—thus functional anatomy. Galileo carefully excluded anima, soul or mind, as an explanation in physics; Descartes conceived beasts and men as automata governed by physical law, the soul of man sitting idly by. This led him to postulate the nervous impulse and the first feedback device, by way of a thread returning up the nerve to shut its valve when its impulse had done its work on muscles. At that time Leibniz was trying to build a computing machine. In his Monadologie he says that if he succeeds in making his computer so that it can think and perceive and feel as we do, then he could make it as big as a mill, but, if we were to wander around within it, we would not see thinking, perceiving, or feeling, only forms in motion. This holds not only for modern computing machines but also for brains. Yet looking at the forms in motion is what we have been doing, and it is in this biophysics that we have been most successful. It cannot explain mind, but it can lead to a physical understanding of how brains work, and this will be our first concern.
By the end of the nineteenth century, the methods of classical physics had given us vast knowledge of growth, form, and function of many living systems. We had a fair knowledge of the anatomical form of the brain, and some knowledge of the functions of its organs derived chiefly from the effect of destruction of its parts. We knew much about reflexes. We had begun to understand some of the chemistry of the brain and were able to detect grossly its electrical activity, using capillary electrometers and galvanometers. About 1930 came modern amplifiers and then microelectrodes letting us look at the activity of single units. Modern neurophysiology got under way. Today one can scarcely keep up with its flood of publications.
At almost the same time modern biochemistry of brains began and has shot ahead, in part out of our necessity to understand the nerve gases of World War II. The specialist is swamped in his literature. Neuropharmacology is now going at the same gait.
Ordinary microscopy had revealed the organelles of protozoa and some details of other cells without fixation and staining, but phase-contrast microscopy made it possible to see the activity of organelles in motion and histology came alive. Concurrently, tissue culture and tissue transplants began to give us control over the factors determining growth and regrowth. Thus between about 1908 and 1940, our picture of biologic growth passed from the static ones of Ramón y Cajal's Textura del sistema nervioso del hombre y de los vertebrados and D'Arcy Thompson's On Growth and Form, in which processes could only have been inferred, to the direct observation of structures moving at least as fast as the eye could follow them. For this classical physics had sufficed.
But more was in store for us. In 1908 the atom ceased to be the simple solid thing it had been. Biology felt this impact of the coming atomic age by way of X rays giving us first photography and fluoroscopy of opaque objects, then treatment for neoplasms, and, latterly, X-ray microscopy with always greater resolving power, down now to less than 10 angstroms, and our hopeful friends expect it to reach 1 angstrom soon.
With the knowledge of isotopes, differing in weight and stability, mass spectrography, scintillation counters, and radioautographs allowed us to follow tracers and to employ beams of many kinds of radiation to produce isolated lesions and to induce mutations. For an understanding of many of the structural properties of molecules, crystals, and parts of cells, X-ray diffraction and magnetic resonance have proved of great importance. The pressing problems raised by our accelerators and by the threat of nuclear war have attracted many first-rate young physicists, given them the tools, and created the posts for a rapid development of radiobiology which is now receiving a new boost from the space age.
This comes first from the hazard of sunspots; sun flares greatly increase the radiation in space by particles moving so fast that their impact is much augmented by their velocity relative to the self-contained space capsule in which we must shield the astronaut and his biological environment. The space age has forced us to look into closed environments where chlorophyll is of importance in utilizing carbon dioxide. We must know the effects of weightlessness on plants, as well as on men whose blood pressure tends to fall and whose calcium and phosphorus pour out of them—problems that cannot be fully investigated on earth.
What is more, the questions of extraterrestrial life, besides the problem of sterilizing everything to land, say on Mars, have compelled us to look at the varieties of chlorophylls, some of which require oxygen, others adapted to function under an atmosphere whose oxygen was probably all liberated in an originally reducing atmosphere of methane, ammonia, and, perhaps, some carbon dioxide, like the atmosphere that may envelop Venus. The question as to whether there is life on Mars is now debated. Mars has some atmosphere, probably somewhat less than a twentieth of ours, with some oxygen and much CO2, and its white polar caps are thought to be icy. Living things as we know them are chiefly water organized by macromolecules. Life not too unlike that on earth is not impossible on Mars. In a few years we will probably make a soft enough landing for instruments to detect it.
Listening to many debates and reading voluminous NASA reports, it becomes clear that most biologists mean by “life” a process of growth, reproduction, and usually movement of some kind. They seek for ways to detect these and relay the news to earth. Most plans presuppose that the metabolism of life on Mars will involve the same chief constituents we find here. These they would detect chemically. But to detect movement, they would like something comparable to vision. The communication back to earth is too poor and too slow to relay back televised pictures frequently enough for us to judge of motion. Our device must have ways of detecting it and noting shapes moving and at rest. Again, this new requirement presses us forward. The device for detecting movement and shape must have some ability to think, therefore some computer, to select and to compress the significant data. There are many hostile environments, and sometimes simple requirements of size, weight, and speed preclude our sending a fellow man. To replace him with proper simulation in hardware requires that we know well what he does, and, because he has evolved to do it well, it pays to know how he does it. Such simulations created bionics, which has joined the biologist and the engineer in working teams to the advantage of both. Similarly, the problems of telemetering significant measures of life processes, or of hooking the living system directly to a computer, have produced team play, generally fostering a new field of bioengineering. Today there are machines teaching differential diagnosis, and there are prospects of automated clinics as well as artificial pacemakers for failing hearts and artificial kidneys. Every simulation sharpens the biophysical problems and discloses any inadequacy of our conception of the function in question.
Contrasted with these rapid and urgent contributions of biophysics and bioengineering, the natural pace of biology is slow for many reasons. This is in large part because it is descriptive before it is anything else. Its observations often require years of careful study to reveal a single process of development and aging, and even longer to detect the effects of selective breeding. This is one reason we know so little of the larger and long-lived mammals in which we are most interested. Moreover, even with inbred strains, living things are so various and their environmental dependencies so complicated that one often needs to study large numbers of creatures under a host of conditions to discover a general regularity.
This is perhaps the chief reason why general biological theories are few. Wallace could say in a short essay what took Darwin years of study and his ponderous Origin of Species to demonstrate, namely, that spontaneous generation of variants and a selection by the environment account for evolution. The proof is more difficult and time consuming than one might expect, for often, when placed in a new environment, a creature makes a significant change in its apparent form that persists for generations in the new environment, only to revert, in its offspring, to its original form when put back into its original environment.
This difficulty, in its most piquant form, was studied by Sir Bryan Matthews who grew rats in a centrifuge, increasing the gravitational force with each generation so that it always exceeded that at which the previous mother could have carried her young. When he demonstrated them at the Cambridge meeting of the Physiological Society in 1953, they walked upright on massive hind legs and were capable of jumping to great heights. Yet their offspring conceived and raised under normal gravity were like their ancestors prior to the whole experiment. Bacteria, with their rapid reproduction, quickly change the enzymes they produce to match the media in which they are placed, and so confuse us in questions of mutation, somatic inheritance, and progressive adaptation. Any and all of these and other mechanisms, comparable to those underlying differentiations of cells in a single organism, may account for our ever present woe in the appearance of strains of pathogenic organisms that have become resistant to particular chemotherapeutic agents. At this writing the pressing case is the development by syphilis of strains resistant to present antibiotics. Time will answer these questions.
There is now a quickening of the pace in comparative anatomy of the nervous system and in comparative physiology. Yet there are many years of work behind J. Z. Young's The Life of Vertebrates and his work on the octopus as there are behind T. Bullock's two great volumes on the neurophysiology of the invertebrates. From the vantage point atop accumulated knowledge of comparative anatomy, comparative physiology, and ecology, E. Lloyd Du Brul is at last writing the first chapter of a book that will, in a sense, give us a critically scientific synopsis of evolution of the vertebrates.
Like begets like not only with the slow variation underlying evolution but also with the surprising appearance of many hybrids. These must be distinguished from mutants by the reappearance of ancestral traits in the offspring of the hybrids. These were best known in domestic plants. Darwin studied them and knew much of the work on them by his contemporaries, for he often quotes them. But he never mentioned Gregor Mendel’s work, which appeared just one hundred years ago and was ignored until 1900. It marks the beginning of today's theoretical biology. Mendel, with a training in mathematics and physics as well as in horticulture, proceeded, like Galileo and like Descartes, by postulating miniature entities invisible to him–we call them genes–and a hypothetical mechanism of their relations, to predict the observable traits of their offspring in ratios approached by adequate samples. This is a truly mathematical biology employing algorithms to compute the expected number of phenotypes in successive generations. Actually, as Fisher has noted, Mendel’s reported counts fit the ratios better than he had a right to expect, and the traits he studied were ideally suited to his problem. As he himself noted, the story was more complicated with other traits, and he could not solve them. Fortunately, the mathematics he needed has evolved, and the data are now sufficient for us to have a truly mathematical theory of population dynamics. The use of algorithms for other biological processes is today being pushed ahead by W. Stahl in this country and by A. V. Napalkov in Russia. Mendel’s use of postulated unitary entities has been followed by the pioneer scientists in vitamins, hormones, and enzymes so that today there is a great body of truly theoretical biology leading us to an appropriate biophysics.
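Mendel's procedure lends itself to a short computation. The sketch below is ours, not the authors'; the genotype notation and function names are illustrative. It enumerates the equally likely gametes of each parent, pairs them, and counts the expected phenotype ratios, recovering the familiar 3:1 and 9:3:3:1.

```python
# A minimal sketch of the algorithmic side of Mendel's postulates: enumerate
# gametes, pair them, and count expected phenotype ratios.  Genotypes are
# written locus by locus, e.g. "AaBb"; upper case marks the dominant allele.
from collections import Counter
from itertools import product

def gametes(genotype):
    # "AaBb" -> "AB", "Ab", "aB", "ab", each equally likely
    loci = [genotype[i:i + 2] for i in range(0, len(genotype), 2)]
    return ["".join(alleles) for alleles in product(*loci)]

def cross(mother, father):
    counts = Counter()
    for g1 in gametes(mother):
        for g2 in gametes(father):
            # a dominant allele at a locus masks its recessive partner
            phenotype = "".join(
                a.upper() + "-" if (a.isupper() or b.isupper()) else a + a
                for a, b in zip(g1, g2)
            )
            counts[phenotype] += 1
    return counts

print(cross("Aa", "Aa"))      # Counter({'A-': 3, 'aa': 1})
print(cross("AaBb", "AaBb"))  # the classical 9 : 3 : 3 : 1
```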
At the moment the most spectacular of these developments is in genetics itself. This began early in this century with the identification of the chromosomes as carriers of the genes and even then went so far as to locate genes in particular neighborhoods on particular chromosomes. There is always a danger to be avoided here—a particular chromosome, say the X chromosome of man, is not the physical object one sees under the microscope any more than the letter x is the ink you see on this piece of paper. A gene is an informed and informing process through which energy and matter flow as long as the system is alive. A gene is an encoded message, not the particular macromolecule of deoxyribonucleic acid on which it is located at a particular moment. T. H. Morgan in his writings on the fruit fly is very clear on this distinction, and it is his teaching that inspired “A Logical Calculus of the Ideas Immanent in Nervous Activity,” to which we shall return later. We do not want to leave unmentioned the present contribution of Mendelian genetics to neuropsychiatry, for at the last count there were nearly a hundred known hereditary disorders of metabolism affecting the nervous system. Those which are Mendelian dominant and of high penetrance may well be due to the crucial deficiency of a single enzyme necessary for the production of a single crucial metabolite which can be administered to the patient, much as diabetes can be alleviated with insulin. We should also note in passing the case of hybridization in man, in which the aberrant gene producing sickle-cell anemia shows incomplete dominance, for it gave us the first wedge into the location of an inappropriate amino acid in a particular place in hemoglobin corresponding to an abnormality in the composition of deoxyribonucleic acid. There are exciting experiments now under way using irradiation to knock out immunity, permitting transplants to take and allowing hybrids to be produced by crosses hitherto impossible, but we know of none that bears directly on our problem.
Unlike muscles and glands, the brain is not built to do chemical or mechanical work. It has truck with the world by way of information. Charles Sanders Peirce defined information as the (logical) sum of all those propositions in which the symbol is subject or predicate, antecedent or consequent. He called information a “third kind of quantity” and Americans usually refer to it as “a quantity,” whereas the English speak of “a quantity of information.”
Peirce was at heart a pansomaticist like the Stoics, and his symbol, like the thing it symbolized, was a physical thing. So also was the brain of the man, speaker or listener, who used the symbol. Similarly, when information theory began about 1927 at the Bell Telephone Laboratories, the problem was how much information is conveyed by human speech turned into electrical signals on a wire and understood by the human listener. This is like the game of twenty questions in which each question has a yes or no answer, commonly called a binary digit, or bit. Provided the receiver knows the language or code of the sender, this fits Peirce’s definition. Our language is redundant. We send more symbols than are necessary for the number of bits we convey. This is useful whenever there are disturbances in the channel. Information theory is concerned with how much redundancy is needed to ensure a message transmission with a given probability of success when there is noise, specified in amount and kind, on the channel.
The theory presumes that the parts of the message are inherently uncorrelated in the sense that the answer to no one question makes the answer to another any more or less probable. In this way the signs are independent of their contexts. Moreover, the properties of the noise can only be stated statistically. In Shannon’s elegant development from the most realistic assumptions, two surprising theorems result, both of which are biologically important. The first is that for a sufficiently large ensemble of messages, while there is an optimum code, almost any sufficiently complicated one chosen at random is almost optimal; and the second, that there exists a code by which one can transmit over a noisy channel with almost no errors provided one does so at less than the capacity of the channel. At higher rates the errors, of course, rise rapidly with the rate. Taking into account the statistical nature of this theory, it is not surprising that information can be equated to ∑p log p, which is a pure number, and the same pure number as appears in K∑p log p of entropy in thermodynamics, where K has the dimension of energy. For this reason, information is frequently called negentropy, as if it were a physical thing. Information theory, like thermodynamics, is primarily concerned with limits of the possible. The former tells you nothing of how to make a code and the latter nothing about particular affinities of reacting components.
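To make the quantity concrete: the sketch below is ours, not the article's, and the sample message is arbitrary. It computes the customary negative sum of p log p (logs to base 2) over the symbol frequencies of a message and compares it with the maximum possible for the same alphabet; the shortfall is the redundancy spoken of above.

```python
# A minimal sketch of the bits-per-symbol measure: the average number of
# yes/no questions needed per symbol of the message.
import math
from collections import Counter

def entropy_bits(message):
    counts = Counter(message)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

msg = "the quick brown fox jumps over the lazy dog"
h = entropy_bits(msg)
hmax = math.log2(len(set(msg)))   # bits per symbol if all symbols were equally likely
print(f"{h:.2f} bits/symbol, {hmax:.2f} maximum; redundancy ~ {1 - h / hmax:.0%}")
```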
They were constructed for those purposes, and every attempt to use them otherwise has been disappointing. Quastler’s Information Theory in Biology and Information Theory in Psychology are good examples of such attempts. As we shall see later, they are helping us in neurophysiology and in genetics in persuading us to look for proper solutions. The English theory of information does not suppose a known code, and hence better describes the case of the scientist or any curious creature. It requires that the creature put the question, which determines the “logon,” and that he gather the necessary observations to answer it, each step of which is a “metron.” This degenerates into the American theory when there is only one metron per logon. Gabor, MacKay, and Cherry have made good use of this more general theory, and we expect to see it have unpredictable repercussions in biology, hopefully in the neurological questions confronting us in perception and decision.
In 1943, new notions of mechanisms handling information entered the biological field. Craik looked on the nervous system as a calculating machine able to model its world and so to be able to think about it and explain how it worked. Wiener, with the mathematics, Rosenblueth, with the physiology, and Bigelow, with the realization that for governance one only needed information of the outcome of previous acts, described the mechanism guiding purposive behavior. Pitts, with the mathematics, and McCulloch, with the biology, proved that circuits made of such simple threshold devices as formal neurons could compute any number that a Turing machine could compute with a finite scratch pad. In each case the engineers have been quicker to pick up these developments than the biologists themselves. Wiener subsumed all these notions in his cybernetics and defined the problem of information and control in all things, man-made or begotten. As Turing clearly stated, a properly constructed computing machine obviously can think and experience and care even as we do. We conceive ourselves to be such machines. Our difficulties lie elsewhere. We are not referring to the limitations of theory imposed by Goedel's theorem, or the halting problem of a Turing machine, or of recursive insolubility, or the like.
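A formal neuron of the kind Pitts and McCulloch had in mind is easily sketched. The fragment below is our illustration, not the 1943 notation: a threshold device fires when its weighted excitation reaches threshold, and a few such devices wired together realize logical functions, including one (exclusive or) that no single threshold unit can compute.

```python
# A minimal sketch of a "formal neuron": a threshold device that fires
# (outputs 1) when the weighted sum of its inputs reaches its threshold.
def formal_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Single units realize the elementary logical functions:
AND = lambda a, b: formal_neuron((a, b), (1, 1), 2)
OR  = lambda a, b: formal_neuron((a, b), (1, 1), 1)
NOT = lambda a:    formal_neuron((a,),  (-1,),  0)

# Exclusive or needs a two-layer circuit; no single threshold unit computes it.
XOR = lambda a, b: OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```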
Though, as Bourbaki would have it, we do have a “theory of homomorphic projection of a free monoid onto a finite monoid” to help us think of Craik's modeling of the world in the brain, as yet we have no theory that extends from artificial languages which are context-free to natural languages that are not context-free. Without natural languages it is difficult even to define the learning process we actually use. There is no calculus of intensional relations, the true-false base of our logic being insufficient. We need at least an irreducibly triadic relation to handle the ordinary psychological statements like A points to B to which he wants to draw C's attention. In the extensional calculus a hippogriff and a unicorn are both the empty set. A narwhal is not. It is possible to program a computer to seek the smallest integers x, y, z, and n > 2 for which x^n + y^n = z^n, and it will start computing. Yet, because we have no proof of Fermat's last theorem, we do not know whether the answer exists. This shows that the nature of the difficulty is very real, no mere quibbling matter. That we can build and program such computing machines is due to our complete calculus of extension, the so-called lower predicate calculus with quantification, i.e., with “for some . . .” and “for all . . . .” Even that theory becomes mathematically opaque when regenerative closed loops are included to subserve memory. Generally they require a nesting of parenthetical functions of ever earlier times. This difficulty may be solved by finding a transparent symbolism, but twenty-three years of study have not disclosed it.
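The Fermat example can be written down directly; the point is that, lacking a proof, one cannot say in advance whether the loop below ever halts. This is our sketch; the bound and the `limit` argument are only there so the search can be demonstrated and then abandoned.

```python
# A sketch of the search the text describes: enumerate candidates in increasing
# order and stop if x**n + y**n == z**n is ever satisfied with n > 2.  Whether
# this loop halts is exactly the question the authors raise.
from itertools import count

def search_fermat(limit=None):
    for bound in count(3):                      # grow the search space steadily
        if limit is not None and bound > limit:
            return None                         # give up (for demonstration only)
        for n in range(3, bound + 1):
            for z in range(2, bound + 1):
                for y in range(1, z + 1):
                    for x in range(1, y + 1):
                        if x**n + y**n == z**n:
                            return x, y, z, n

print(search_fermat(limit=25))                  # prints None: nothing found up to bound 25
```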
The problem of purposive, homeostatic, or controlled behavior which is the central theme of cybernetics is up against our inability to foresee what will happen when two or more components are coupled. Even if each feedback loop is stable, the combination of several operating upon a given effector may produce destructive oscillations or lock the system in an extreme position. We know tricks to handle this problem when both loops are so-called linear systems, i.e., systems in which an addition of causes leads to an addition of consequences. There can be no general theory of nonlinear oscillations. Only a few simple types can be handled mathematically, usually by the so-called second method of Liapunov. Brian Goodwin has made good use of it in Temporal Organization in Cells. Professor Caianiello, at the 1966 Bionics Symposium, has proposed a sharp physically sound mathematical analysis for handling nonlinear oscillations. It is the first significant advance in this field since Wiener's handling of nonlinear filters. Whatever else a neuron is, mathematically it is a highly nonlinear oscillator. Note that we have been speaking of purely logical and mathematical problems which, regardless of the physics of the system considered, might be made of hardware of any description. Real brains are made of semisolid-state components whose physics we will next consider, because they are the biological substrate of man, real man able to enjoy dialogue. To him we will return from another angle. At every level and between levels there is a dialogue. We shall start from the dialogue of H2O's.
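The coupling difficulty can be seen already in the linear case, where the tools exist. The sketch below is our numerical illustration (the matrices are chosen for the example, not taken from the text): two loops, each of which decays stably on its own, oscillate and diverge once they are strongly cross-coupled.

```python
# A minimal numerical sketch of coupled feedback loops.  The discrete-time
# state update is (x, y) <- A (x, y).
def simulate(A, steps=40, state=(1.0, 0.0)):
    x, y = state
    for _ in range(steps):
        x, y = A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y
    return x, y

alone   = [[0.9, 0.0], [0.0, 0.9]]   # each loop decays by 10% per step: stable
coupled = [[0.9, 1.5], [-1.5, 0.9]]  # the same loops with strong cross-coupling

print(simulate(alone))    # shrinks toward (0, 0)
print(simulate(coupled))  # grows while oscillating: eigenvalues 0.9 +/- 1.5i, magnitude ~1.75
```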
WATER — “THE MOTHER AND MATRIX OF LIFE”
What one would like to do is to start from the ultimate particles, reach a model of atoms so as to explain their chemical properties in forming molecules, and then deduce from this a proper model of their aggregations to explain the structure of micelles and organelles, and then from these deduce the structure of cells, thence the structure of tissues determining organs and organ systems, and finally to present the organization of these systems into a lively anatomy of social man. This is, of course, not yet possible. While we do use theories of atomic structures such as the spin of electrons, valence electrons, etc., to help us (1) to imagine the behavior of atoms, (2) to design instrumentation, and (3) to interpret the results in order to determine their momentary organization, the process is far from a pure deduction.
Behavior is characteristic of larger, more complex, structures that cannot be inferred from what we know of their components separately. The same is true of the eternal verities of logic and mathematics. Hence this limitation cannot be attributed solely to the present state of our knowledge of the components. Nevertheless, the properties of components put constraints on the system they compose, and an inadequate knowledge of these components often leads us astray, as in the case of water in living systems.
The liveliness of water and its icy forms in biological organization is a delightful story. We had once conceived of cells as sacks of solutions surrounded by a membrane permeable to some ions, not to others. Our thermodynamics, handling only equilibrium, gave us the famous Donnan membrane equilibrium, to account for the voltage through the membrane. But such a picture of membranes would be true only if they were dead. Guggenheim has shown that the electrochemical potential cannot be separated into chemical potential and electrical potential at equilibrium. About 1930 Teorell, working with Shedlovsky, showed that they could be separated in the steady state, say by a steady inflow of CO2 into a sack. But the picture was still wrong for want of a clear theory of the thermodynamics of open systems. Moreover, something was wrong either in the detail of the model of the cell membrane or in the cell contents. The energy of dilution was too small to account for the voltage by simple diffusion. Then Shedlovsky showed that over 80 percent of the conduction of electricity by pure water was due to proton hops, not to migration of OH- or H3O+ ions. He constructed a membrane of thin, soft glass permeable to H+, covered by a thin layer of an insoluble barium soap of a simple 10-carbon acid. With the same solution on both sides of the membrane, it produced voltages comparable to those of cell membranes. Just as a battery, by separating locations of oxidations and reductions, can produce a current through a metal that conducts only electrons, so by separating acid and base by a membrane that conducts only protons, a current can be obtained. This proved that one could produce at least one model that might account for the voltage through a membrane. Several things happened that forced the biologist to look into the structure of water in living systems. During World War II studies in explosive decompression of tissues showed that cells were not ruptured but that intercellular spaces expanded making a spongy structure, even when gases like CO2 could have passed into cells where there are hosts of particles that might have served as nuclei for bubble formation. Next, with the advent of microelectrodes and their insertion in the giant axons of the squid, it became possible to compare the voltages between several internal electrodes and a common external electrode. When several internal electrodes of dissimilar chemical composition were simultaneously inserted, comparison of the voltages was incompatible with known electromotive forces in any solutions. Here, again, the living system cannot be explained by the chemistry of solutions. Finally, Szent-Gyorgyi had found that dyestuffs that fluoresce in water and phosphoresce in ice phosphoresce in muscle and squid axons. The structure of the water in the muscle and its cells must, therefore, be of a form different from that of simple solutions. We are compelled to look at ice.
We are now acquainted with seven varieties of ice and have a fair crystallographic understanding of their structures. Around 1930 the first of these, ordinary ice I, had been analyzed. The structure of water was first discussed in the modern crystallographic sense by Bernal and Fowler in 1933. The authors were concerned with X-ray scattering, dielectric constants, and with the extra mobility of the hydrogen ion in acid solutions from a quantum-mechanical point of view. From that time on physicists have been studying the structure of water.
When ordinary ice, ice I, melts, only 14.4 percent of its shared hydrogen bonds are broken, and even at 37 °C (body temperature), less than a third are gone. Thus throughout this range there must still be bonded clusters of many molecules even if some molecules are separate. The bonds in such clusters endure on the average about 10^-12 sec. as estimated from dielectric relaxation. At 10^-12 sec. the molecules can only vibrate. Frank and Wen, in 1957, aptly described these clusters as flickering, in the sense that they are always rapidly forming and again breaking up. In 1960 Berendsen and McCulloch produced a model for such a cluster consisting of two parallel pentagonal faces and five warped hexagonal faces. This form would determine properties in conformity with what was then known of the radial distribution curves, coordination numbers, heat of fusion, and maximum density at 4 °C. This shape puts little angular strain on the four bonds which naturally stick out like the spikes of a caltrop (at 109°28') as far apart as possible. The cavity of this cage easily accommodates an extra molecule of water. In this, these cages resemble the shells of water, called “clathrates,” which form around hydrocarbons to make, for instance, the slush in our gas tanks. Both the cages described and the clathrates, although they can be formed into larger structures in one or more directions, cannot fill all space like a true ice. It is important to note that the cages can be stacked pentagon to pentagon and can be attached easily to a small cluster of atoms in an ice I and even ice II configuration. This is certainly not the only model that fits most of the experimental data; Pople's (1951) and Von Eck's (1958) do it also, but the one presented here was constructed with biological considerations that will become obvious.
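The caltrop angle quoted above is the tetrahedral angle arccos(-1/3); a quick check (ours) confirms the figure of 109°28'.

```python
# Verify that arccos(-1/3) is about 109 degrees 28 minutes, the angle at which
# the four bonds of a water molecule point as far apart as possible.
import math

theta = math.degrees(math.acos(-1.0 / 3.0))
degrees = int(theta)
minutes = (theta - degrees) * 60
print(f"{theta:.4f} degrees = {degrees} deg {minutes:.0f} min")   # ~109 deg 28 min
```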
An important point to be made here is that liquid water is capable of so many possible flickering clusters that it can enter into various structures in many ways. Almost every measurement that has been made with sufficient precision over a large range of parameters, whether of pure water, solutions, or emulsions, if it could possibly depend upon the aggregation of water into structures, shows sharp transitions. This bespeaks some form of cooperative process. If the formation of a hydrogen bond between two water molecules promotes the hydrogen bonding to the complex, and it takes the breaking of many bonds to destroy the whole structure, we have a proper model. This is the familiar Ising problem of order-disorder transitions: If we were dealing with a simple string of O-H-O-H-O-H-O— with a short-range force linking one O to the next O, say 3 Å, the transition would begin as soon as the disturbing force, say thermal agitation, could break one link, and would simply grow rapidly with the temperature. This is exactly what does not happen. If, instead of supposing long-range forces, we continue to seek understanding of the stability of the structure by nearest neighbor relations, the geometry of the water structure must be more complicated than the simple O-H-O-H-O-H chain. The simple problem has now been solved for the square array in which each component is held in place by its four neighbors (but not for three dimensions). These structures are stable up to a critical temperature, “a melting point,” above which they break down rapidly. There is here a proper physical analogue of Shannon's famous coding ensuring information-theoretic capacity—of which more later.
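The square-array case mentioned above can be imitated numerically. The following Monte Carlo sketch is ours (the lattice size, sweep count, and temperatures are illustrative): each site interacts with its four nearest neighbors only, yet the order parameter stays near one below the critical temperature (about 2.27 in units of the bond strength) and collapses above it.

```python
# A minimal Metropolis Monte Carlo sketch of the square-lattice Ising model:
# nearest-neighbor forces only, yet a sharp "melting point" appears.
import math
import random

def magnetization(T, n=20, sweeps=400, seed=0):
    rng = random.Random(seed)
    spin = [[1] * n for _ in range(n)]                 # start fully ordered
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        neighbours = (spin[(i + 1) % n][j] + spin[(i - 1) % n][j]
                      + spin[i][(j + 1) % n] + spin[i][(j - 1) % n])
        dE = 2 * spin[i][j] * neighbours               # energy cost of flipping this site
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spin[i][j] = -spin[i][j]                   # Metropolis acceptance
    return abs(sum(map(sum, spin))) / (n * n)

for T in (1.5, 2.0, 2.27, 3.0, 4.0):
    print(T, round(magnetization(T), 2))               # order collapses past ~2.27
```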
What happens when we add ions to water depends, in part at least, on the strength and form of their shells of water. Some will tend to break structure, some to strengthen it. K+ fits harmlessly in ice I; Na+ pulls the nearest water in too hard, producing a spiky shell that will not fit, and, consequently, this ion is expelled as water crystallizes. An Eskimo drinks a melt of last year's frozen seawater as happily as a New Englander drinks his applejack, for the alcohol and extractives are expelled from the ice of freezing cider and are tapped. Let us not here discuss the multiplicity of forces involved, or the particular role each plays in this expulsion. They are not fully understood. Similarly, we will neglect the changes in free energy, in entropy and enthalpy, etc., which are understood. They are the business of physics proper, and the physicists will solve them. They can be understood in terms of a drift to equilibrium at the lowered temperature.
Instead, let us move on to the relation of water to molecules which dissolve in water, the so-called hydrophilic molecules like sugar; those that do not, the hydrophobic, like oil; and those that have one end hydrophilic—called the polar end—and the other hydrophobic, called the nonpolar end, like soap. We call these amphiphiles. The attractions and repulsions of these molecules to water and to each other make structures called micelles, which constitute many parts of living cells. To understand their formation we must note that in the bulk of liquid water the molecules and cages are tumbling about, and, therefore, the electromagnetic fields due to their valence electrons are on the average equal in all directions. At and near to a surface this cannot be so, and the lopsided fields there produce not only surface tension but also a tendency for any substance that will render the change less abrupt to accumulate on or between surfaces.
If one shakes a little oil in a lot of water so as to break the oil into such small clumps that the mixture looks like milk, he sees small spheres developing and then merging into larger spheres, for, when two touch, the surface tension of the water pulls the water out of the angle between their surfaces and squeezes them both into one larger sphere.
If we coat these spherules with a proper material that will lower the steep gradient from water to oil and stay at the interface, we have produced an emulsion. For this the housewife uses milk and white of egg in making mayonnaise—stabilized micelles.
Since the micelles we are most concerned with in animals, and above all in their nervous systems, are in large measure composed primarily of lipids, notably phospholipids, we should picture them carefully. Phospholipids are amphiphilic. They lower the gradient between water and air and accumulate there, with their polar heads burrowing among the flickering cages of water and their slippery paraffin tails writhing freely among each other. The present guess is that the bonds of the head to water endure some 10^-7 sec—or 10^-8 sec, which is a thousand times longer than the bonds of water to water at body temperature. When two layers of such amphiphiles come tail to tail, their hydrophobic tails fit rapidly around each other and their heads stick into the surrounding water. Such double-layered striations and their formation into sheets, tubes, and semisolids have been elaborately studied and are comparatively well understood, for they are of great importance to the soap and detergent manufacturers who are at the present moment under heavy fire for the untoward consequences of stable detergents producing foam and killing bacteria—as well as birds, amphibia, and fish.
Structurally, proteins are the most important protectors of double-layered micelles of phospholipids. Think of these proteins as long peptides having C's double-bonded to O, and nitrogens which, if they are not buried in binding peptide to peptide internally, are accessible for formation of strong hydrogen bonding to water. The proteins important in coating and stabilizing membranes are those which can spread themselves as relatively flat plaques on the hydrophilic heads of the micelles. They are again to be expected there because they lower the gradient of the field at the surface of the micelles where they spread themselves. Neither the protein molecules nor the phospholipids are all of one kind, nor does any one necessarily keep its shape or stay in place very long. The protein molecules are lumpy and probably wrinkled and can stretch and shrink. Both the angle of bonding of its peptide chain and the X-ray microscopic picture suggest that each is roughly hexagonal, and there seem to be holes and cracks between them. More protein layers may form upon the first, and layers of other materials, say myelin, may be added.
The thinnest membranes of nerve cells that have been well studied in the large cells of the sea hare have a capacity of forty microfarads per square centimeter, and the thick walls of the giant axon of squid have about one microfarad per square centimeter. These fit well with the known dimensions of the membranes. Calcium seems to be the ion of greatest importance in preserving this structure, for when it is totally removed the membrane of nerve goes to pieces rapidly. To this we will return later. Suffice it here to say that no one is completely sure as to the exact structure of the membrane of cells or its precise relation to water bound to it or to the detail of its action in the many roles it assumes in the living cell.
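How the capacitance constrains the dimensions can be seen with a parallel-plate estimate. The sketch below is ours; the relative permittivity of about 5 assumed for the lipid core is an illustrative value, not a figure from the text.

```python
# Back-of-the-envelope: treating the membrane as a parallel-plate capacitor,
# C/A = eps_r * eps_0 / d, so a capacitance per area of about one microfarad
# per square centimeter implies a thickness of a few tens of angstroms.
EPS0 = 8.854e-12           # vacuum permittivity, F/m
eps_r = 5.0                # assumed dielectric constant of the lipid layer (illustrative)
c_per_area = 1e-6 / 1e-4   # 1 microfarad per cm^2, expressed in F/m^2

d = eps_r * EPS0 / c_per_area
print(f"implied membrane thickness ~ {d * 1e10:.0f} angstroms")   # roughly 40-50 A
```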
As opposed to our present ignorance of protein in membranes, the structure of the protein collagen was well known in 1960. It is a stable extracellular fiber, composing tendon. It consists of three peptide spirals forming a helical cylinder. The distances along this cylinder at which hydrogen bonding could occur were known from infrared studies, and the distances between cylinders, from X-ray diffraction. Therefore, Berendsen and McCulloch decided to investigate its hydration. Because the distances along the cylinder were six times the distance of ice I and the angles around the spiral were 36°, requiring a tenfold symmetry of structure, the hypothetical water shell was proposed and a crude model built utilizing the cage they had found the simplest that fitted the facts for water as described before. Long and careful nuclear magnetic resonance studies of collagen fibers at various levels of hydration and at various angles with respect to the magnetic field by Berendsen formed the foundation for his justly lauded Ph.D. dissertation.
Berendsen has continued his research on fibrous material to include keratin and silk fibroin and found that water that is attached sits at an angle askew to the fiber direction. Finally, he studied deoxyribonucleic acid, DNA, from sperm, finding that its water bonding is at right angles to the fiber axis, which may account for some of its strange properties in ordered solutions. So far collagen seems to be unique in forming its principal chains parallel to the axis. But the 36° repeating angle around the axis is common to it and to DNA and suitable to pentagonally faced cages of water.
THE SHAPE OF LITTLE THINGS ACTIVE IN THE LIFE OF CELLS
Phospholipids, proteins, nucleic acid, and probably stringy molecules composed of protein and sugars—called mucopolysaccharides—are the macromolecules whose micelles with water constitute the structure of animal cells and their organelles. We may sketch the modern picture of the cell best by saying that we think of it as a volume enclosed by a membrane from which there extend, into the cell, clefts and tubules continuous with it, much as our sinuses and our gut and our lungs extend into the overall man and are continuous with his hide. Many of the organelles, say mitochondria and ribosomes, seem to be attached to these intrusions of the cell membrane. These may even extend into the nucleus despite its surrounding membrane.
The space around the nucleus of cells certainly has some sort of structuring—fibrous and membranous and microsomial—chiefly micelles protected by protein. During life these are all in motion. Particular organelles may be long-lived or short-lived, but the macromolecules and their water of hydration are certainly in flux. The individual neurons of our brains may live as long as we do, but their protein molecules have mean half-lives measured in hours or, at most, days. Everything important to the cell must be rapidly replaced. In this, the internal structure must keep the process organized. The relatively small protein molecules which we call enzymes must be attached to the mitochondria in such positions as to be able to couple the reactions they catalyze. For example, so long as they are intact even in vitro, they can couple the oxidation of sugar, ending in H2O and CO2, to the phosphorylation producing adenosine triphosphate (ATP) whose energy is required for muscular activity and nervous conduction. Simply to smash the mitochondria destroys the coupling, and oxidation proceeds without phosphorylation. The material of which mitochondria are made is tough and inert. The same is true of ribosomes, neurofibrils, and the sheath cells of Schwann that form myelin sheaths around the axons in peripheral nerves. The best guess yet is that their endurance is due to the nature of the molecules of which they are made, say, complexes of sugars and globulins or mucopolysaccharides, perhaps hyaluronic acid, for it is attacked by hyaluronidase. Unfortunately, although hyaluronic acid can be obtained in large quantities relatively pure from the vitreous humor of the eye, we can find no nuclear magnetic resonance (NMR) study of its binding of water. We suspect that hyaluronic acid organizes water because in the somewhat dehydrated state it prefers K+ to Na+ ions.
Because we are interested in the functional nervous system in which neurons do not multiply after we are born, it might be supposed that we could neglect DNA and its role in the production of RNA which synthesizes proteins. Curiously, this is not the case, but were it so, the story of genetic coding is so exciting at the moment that we would deal with it here for its own sake.
We are decidedly not concerned with cell multiplication in its detailed procedures but only in this: that chromosomes, which carry genes as determinants of phenotypes, are loaded with DNA, and that it is DNA that carries these genes. The nucleoprotein of the chromosomes is spun out into fibrous molecules in the nucleus of cells. Each molecule has a central string of deoxyribose phosphate with four kinds of organic bases attached at each juncture. Two such spiral fibers form the helix of the whole molecule. They are put together head to tail to permit their hydrogen bonding. It has been known for some years that our genes are encoded in the sequences of organic bases on these spirals. Three adjacent bases can spell sixty-four words, each of which might specify any one of sixty-four amino acids to be hooked each into its place in the peptide chain of a protein.
Actually, all living things, plants and animals, apparently use at the most only a much smaller number of amino acids, all of which rotate a beam of polarized light to the left. For animals the number is probably twenty. So the code is redundant. There is more evidence to the same effect. To understand this we must look to the process of reading the code in the assembly line of protein synthesis. When cells are multiplying, they must replicate enough DNA molecules to supply both daughter cells, but at all times the DNA must replicate at a rate sufficient to keep up with its destruction. To replicate, each molecule of DNA separates its double helix into two strands, each of which produces a new partner. To be read, each strand of DNA makes a long molecule of RNA that peels off and wraps around a ribosome, and makes short RNA molecules that bring amino acid to the proper place to be attached by the aid of appropriate enzymes. The second evidence of redundancy of coding comes from this, that the composition of the protein molecule produced is less variable than the composition of RNA which is less variable than the DNA, but, for purely technical reasons, this evidence is not as convincing as that from the number of possible code words (sixty-four) and the number of amino acids—say twenty.
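The counting argument for redundancy is easily verified; the few lines below (ours) enumerate the sixty-four possible triplets of the four bases and divide by the roughly twenty amino acids.

```python
# Four bases taken three at a time give 64 code words for about 20 amino acids,
# so the code must be redundant: roughly three codons per amino acid on average.
from itertools import product

bases = "ACGU"                       # the four RNA bases
codons = ["".join(c) for c in product(bases, repeat=3)]
print(len(codons))                   # 64
print(len(codons) / 20)              # ~3.2 codons per amino acid
```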
To solve this coding problem, one needs first to know the sequence of amino acids in the peptide chain of the protein and then match it with the sequence of bases of RNA in the assembly line. The procedure in each case is to fragment the chain into small enough molecules so that one can determine the sequence in each moiety. By use of proper agents which attack primarily linkages between particular amino acids or particular spots in the nucleic acid chain, this can be done. The first protein molecule of reasonable size for which this was accomplished was insulin. The one of greatest importance here was hemoglobin—a really large one.
There is a hereditary disease called sickle-cell anemia which is of great survival value in areas infested with malaria and the proper mosquitoes to inoculate man, for the red blood corpuscles of sickle-cell anemia are resistant to the parasite. To receive the factor from one parent imparts the immunity without much trouble to the patient's circulation, but to inherit it from both parents leads to severe difficulties. The simple factor produces one wrong amino acid introduction at a single place. The double factor leads to a double error. The iniquitous amino acids and their positions had been identified and located, and the ribonucleic acid was under investigation, and G. Gamow and F. H. C. Crick of Cambridge University, working as one does in cracking a code, had ingeniously proposed matchings between sequences of amino acids in relation to the bases of DNA, when M. Nirenberg of the National Institutes of Health beat them to the draw by synthesizing short chains of nucleotides and demonstrating their effects on peptide synthesis. The size of the problem can only be realized by noting that there are four kinds of nucleotides, and some thousands of them in one DNA molecule make some five billion nucleotides in our chromosomes. This picture of the structure of these molecules suggests a proper way of looking at mutations produced by crossovers, as Morgan supposed, and by accidents due to radiation, etc., and by invasion by viruses. Viruses themselves are either DNA or RNA with some shell of protein usually present at the head end which gains admission to a cell and there utilizes the mechanism of the host to make more in its own image. Without its host it has no metabolism and no reproduction, and its motion is at the mercy of forces not of its own making. It is scarcely alive. Thanks to H. Fernandez-Moran's low-temperature electron microscopy, the bacteriophage T2 is visible at a magnification of 525,000 which even shows partial hydration of its tail.
We will learn much of viruses in the near future because their use against pests destroying our crops is being seriously considered. Moreover, there is a high probability that they are responsible for a few varieties of cancer in animals, and there is a possibility that they may produce cancer in man. Finally, it is likely that in the next year or two man will synthesize a virus, as near as he is likely to come soon to making a form of life.
At the moment the study of virology has raised a new question in immunology. There seem to be many kinds of cells which, when they are infected with a particular virus, make something called “interferon” which they pour out into the surrounding medium whence it reaches other cells of their own kind. These it renders relatively immune not only to that virus but to many other kinds of virus. This is the reverse of what we would expect, for it is host-specific instead of invader-specific, whereas in most immunity an antigen induces the production of an antibody which is antigen-specific but not by any means host-specific—so we transfer tetanus immunity from a horse to a man. This may point to a difference in the reaction to proteins in the ordinary case as opposed to reaction to nucleic acid structures, DNA or RNA in the case of “interferon.” The former is a fast reaction, say less than an hour. The latter, slow, say a couple of hours.
What one sees in the reaction to the foreign protein is a production of RNA which builds up rapidly to a high concentration within half an hour. This betokens a high rate of protein synthesis. We know that the protein formed in the presence of the alien protein is somehow templated by it to fit it like a key in a lock, so preventing it from damaging the host. Thus each protein is responsible for one specific shape. Moreover, once the cells have learned to produce this antibody, they continue to do so for very many generations of cells. This learning at the protein level suggests it might be worth our while to look at it for an explanation of higher levels of organization, and that leads us back to our problem, for it is now considered a significant factor in producing those traces of former activity underlying long-term memory.
MEMORY
Craik described nervous activity as a modeling in the brain of the world, including the rest of the body, surrounding the brain. There is evidence now that this modeling begins before we are born and that, in the newborn, when sensory signals of any one sort are excluded, structural changes of neurons, not merely functional changes, occur. There seems no reason to doubt that the models in healthy brains are updated as long as we live. Fundamental to this modeling is memory, an activity of which we distinguish three phases. There is a translation of the input into an enduring process. There is the preservation or storage of that process, called the trace, in the brain, and there is the reading out from the trace to affect future conduct.
To speak thus of memory as a process is somewhat misleading. We know of many ways in which traces of previous activity persist. They differ in the manner in which they are produced, in the duration for which they endure, in the way they affect subsequent activity, and in the effect of subsequent events upon these traces. The briefest of these was studied by Lorente de No, who called it the “period of latent addition.” When two stimuli, each insufficient to fire a neuron, arrive at neighboring points on the neuron within 120 microseconds, they sum at full strength. If they are separated by 200 microseconds there is no addition. There is a delay of about a millisecond from the time an impulse arrives at the effective junction, or synapse, between one neuron and the next until the next one fires. When a neuron has been fired, its threshold is altered for durations measured in milliseconds. There are delays in conduction greatest in the smallest conductors of signals. Thus, the history of firing affects the conductors. If this is taken into account, there is then a possibility of storing information by activity in closed paths, as first suggested by Lawrence Kubie in 1930.
Lorente de No showed the importance of persisting activity in those loops of neurons producing nystagmus, which, as long as it persists, causes us to attribute rotation to the world. We cannot put an upper limit on how long such activities persist. It takes about a day to lose one's sea legs after a week at sea. Activity in such regenerative circuits, by preserving the figure of signals received from their inputs, produces at later times that same figure of signals to affect subsequent nervous activity. Injury to nerves causing electrical cross talk between outgoing and incoming fibers produces a disease called “causalgia.” In this the patient suffers a burning pain in the part affected and requires surgical intervention to open the regenerative loop. Early in the disease this can be done by section of nerves. But later, closed loops become regenerative in the spinal cord and eventually in the brain, requiring surgery there. Unless the regenerative disorder is stopped by cutting the loops at the proper places, the regeneration persists as long as the patient lives. If the wound of the nerve is promptly handled so that it heals properly, it is often possible, either by blocking conduction temporarily or by throwing in enough other stimulation, to stop the regeneration and so cure the patient. Since this is useless later, we suspect that the persistent activity has produced an enduring structural change; that is, a change in the persistent activities that preserve the properties of neurons. This, then, is a pathological example of the making of a trace. The same process may well underlie normal memorizing. Whatever that process is, it must affect the future behavior of the neurons involved so as to alter the circuit action of the system.
In 1961 Hyden developed a skill with micropipettes that enabled him to suck out the contents of a single neuron and a microchemical method that enabled him to analyze the tiny sample quantitatively. By comparing the contents of normal neurons with those involved in a conditioning process, he could show a great rise in their content of ribonucleic acid. Its production is not instantaneous and requires something like half an hour to reach its peak. Thereafter, its concentration slowly decreases. This leaves little doubt but that protein synthesis follows the same curve in these neurons. Hyden, therefore, proposed that this process, like that of antibody formation, produced a highly specific protein in some way related to the specific activity required in the conditioning process, as an antibody is specific to its antigen. His work has been amply confirmed by simply staining for RNA at the proper time, but many workers doubt his conclusion as to the specificity of the protein, for they do not see how the figure of excitation could serve as a template or why that would be necessary.
For some years now the Worm Runner's Digest has delighted us with the work on the flatworm, which now is definitive as to the effects of ribonuclease (which destroys RNA) in preventing conditioning and of extra RNA in promoting it, and as to the fact that the conditioned worms do have a higher concentration of RNA. Next it was shown by several good observers that anything that prevents protein synthesis prevents conditioning. The difficulty was that under such treatments all animals were very sick. More recently it has been possible to stop the incorporation of particular amino acids and so prevent the formation of particular proteins while the rest of protein synthesis goes ahead properly, and one method has proved useful in the mouse, for it prevents conditioning. Therefore, it seems likely that it is some particular protein that must be synthesized. If that proves correct, then, by tagging the amino acid with tracer elements, we should know precisely which proteins are being made and where they are distributed in neurons. This would give us a better guess as to how the newly formed protein affects the circuit action of the affected neurons. The distribution of the neurons involved is of great importance to our understanding of how the process underlying the conditioning is distributed in the brain. Certainly not all the neurons in any given area are so altered. Even in the case in which, by chemical irritation, one sets up an epileptic focus in one temporal lobe which, by constantly firing to the opposite temporal lobe, sets up a secondary focus, which is then fixed and stained, only a fraction of the cortical cells of the new focus are loaded with RNA and these are scattered.
Except, possibly, for a very few very drastic stimulations that produce conditioned avoidance by a single experience, if all the activity of a brain is suddenly halted by chilling it or by an overwhelming anesthesia within the first quarter hour or often in the first half hour after the conditioning, no trace can be discovered. So even those enduring, or long-term, traces require an ongoing activity of some sort of reverberation for their consolidation. At present the search is on for their form and place in the brain, but as yet there is no general agreement as to what to look for or where to look for it.
That there are many varieties of processes responsible for memory is clear from the curves of forgetting. First, that which is carried only in reverberation disappears with its cessation. Second, learning of nonsense syllables produces traces with a mean half-life of half a day, and frequent repetition of recall reinforces the fixation, bettering the performance; whereas, third, with skilled acts frequent repetition produces poorer performance, but after a long period of cessation the skill is so much better that it seems, as James says, that "we learn to swim during the winter and to skate during the summer" (GBWW, Vol. 53, p. 71b); and it has been suggested that this is due to a forgetting of the many little errors we accumulate during too frequent practice. Fourth, there is incidental memory of events each of which occurred only once and has never been knowingly recalled in the interval, like scenes from our childhood remembered in old age. Whereas the second and third can be explained by preservation of the trace with a mean half-life of half a day, which, as Halstead pointed out, is comparable to the mean half-life of protein molecules, the fourth requires some regenerative process whose energetic requirements must be considered, for no addition of half-life curves accounts for such endurance.
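A one-line calculation, ours and not the authors', shows why the fourth class cannot rest on decay curves alone: with a mean half-life of half a day, the chance that a single untouched trace survives even one year is about 10^-220, so lifelong incidental memories must be regenerated rather than merely retained.

```python
# Our own back-of-envelope check: the probability that a trace with a half-life
# of half a day survives one year (730 half-lives) without regeneration.
survival_one_year = 2.0 ** -(365 / 0.5)
print(survival_one_year)   # about 1.8e-220, i.e. effectively zero
```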
There is a disease of old age called presbyophrenia in which the patient preserves his judgment and his former memories but can make no new trace outlasting his reverberating activity. That is, once his attention to a topic ceases, he cannot recall it. Bilateral injuries to the oldest portions of our cortex produce the same deficit. Hence, it is clear that, though this is not the place, or the only place, in which persist the traces of long-term memory—for the old ones remain—it is in some unknown way able to issue some order to reverberate or, perhaps directly, to record ongoing items of importance. The difficulty with the first of these notions is that it requires a reverberation distinct from the process of keeping in mind the concurrent context of what the patient is still considering. The second presupposes only some signal, a so-called To-Whom-It-May-Concern message, to start protein synthesis in the cells then active in the conditioning process. While there is much current experimental work on these questions, we know of no one so bold as to assert that this message is definitely an electrical signal or definitely a chemical signal, or even that it goes directly rather than indirectly to the tissues involved. We do know that the hippocampus is concerned with olfactory and gustatory as well as somatic inputs and with activities of feeding and mating, etc., in all of which there is a strong feeling or affective component. Since these do, in large measure, determine conduct, it is not so strange to think of the evaluating activity determining what is to be remembered, say by protein synthesis in neurons. When we recall the extent of recovery of function over the years that Klüver and Bucy found following more inclusive lesions of the brain, we are inclined to doubt that the hippocampus is the only portion of the brain that determines the making of these traces. It will probably take years of careful study with animals operated on in infancy to be certain of the answer. Our friends Paul MacLean, working on monkeys, and Turner McLardy, on rats, are already under way on these problems, both as to the mechanism of hippocampal messages and the recovery of function after its destruction. Ultimately it must depend upon the growth of other structures that in some sense have paths around the destroyed tissue.
It would be strange indeed if the law of growth with use and atrophy with disuse or abuse, which is true for all other cells, did not also hold for neurons. In young brains the branching of neurons seems to be relatively diffuse, but as brains age many of the smaller branches tend to disappear and some of the larger tend to grow. How much of this can be attributed to mere maturation and how much to learning is a moot question. In either case, the diffuse branching should subserve the relatively diffuse or generalized response that is characteristic of the young and inexperienced, whereas the sparse, coarser connections should mediate the more specific and less flexible performance characteristic of the aged and experienced man.
In such a process, the sequence of events should determine a sequence of structural changes, for each change is one affecting the structure already created. The new is but a modification of the old. Thus, both in maturation and learning, the sequence of events may well determine dissimilar structures serving the same end, somewhat as two trees differently pruned as saplings may develop much the same crown to optimize their posture to sun, wind, and rain.
The consequent relation of many forms to one function makes it difficult to guess what similarity of anatomical details corresponds to any given example of acquired function.
The story, told by Albert Uttley at the National Physical Laboratory, Teddington, England, of the increase in strength of excitation from one neuron to another with increase of their simultaneous activity divided by the sum of the frequencies of their independent firing, and the inverse story for the strength of inhibition, suggests at least one place to look carefully for anatomical change in mammalian cortex. But even this cannot remove the difficulty inherent in the many-to-one relations of structure to function.
One question concerning human memory that disturbed physicists (including Schrödinger) in the 1940's is its great capacity. Information theory first enabled them to estimate its storage in bits. In 1948 Heinz von Foerster published his Das Gedächtnis, wherein he supposes an access time—to write in or read out of the store—of the order of one millisecond over, say, a few million fibers, and a mean half-life of a trace, say, half a day. And he asks how big the capacity would have to be to be in dynamic equilibrium with its access with such a rate of loss of traces. He calculates it should be somewhere between 10^12 and 10^14 bits. From the mean half-life he computes the size of the potential hill over which some items will jump by chance at body temperature to be 2.3 electron volts. Even if we suppose some process to regenerate a few percent of our traces as long as we live, on the foregoing considerations von Foerster computes the amount of energy required for regeneration of all traces to be much less than one percent of the twenty-four watts, which is the best figure for the energetic requirement of a real brain. This we estimate from the heat it produces or from its oxygen consumption or carbon dioxide production. Thus energetic considerations can easily accommodate even this capacity of memory.
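The equilibrium argument can be checked on the back of an envelope. In the sketch below the fibre count is an assumption standing in for the text's "few million," and the 2.3 electron volts is von Foerster's figure; the point is only that the capacity lands in the quoted range and that the power needed to regenerate every trace is vanishingly small beside twenty-four watts.

```python
# Our back-of-envelope version of von Foerster's argument.  The fibre count is
# an assumption standing in for "a few million"; the 2.3 eV barrier is his.
import math

fibres      = 2e6            # assumed number of access fibres
access_time = 1e-3           # s, one millisecond to write or read a trace
half_life   = 12 * 3600.0    # s, mean half-life of a trace: half a day

write_rate = fibres / access_time           # traces (bits) handled per second
mean_life  = half_life / math.log(2.0)      # s, mean lifetime from the half-life
capacity   = write_rate * mean_life         # bits held in dynamic equilibrium
print(f"equilibrium capacity ~ {capacity:.1e} bits")     # ~1e14, inside 10^12 to 10^14

# Energy to regenerate every trace once per mean lifetime at 2.3 eV apiece.
eV = 1.602e-19                              # joules per electron volt
power = capacity * 2.3 * eV / mean_life     # watts
print(f"regeneration power ~ {power:.1e} W against the brain's ~24 W")   # ~1e-9 W
```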
The most generous estimate of its size from a psychological point of view that we know of is that of John Stroud, who will allow us no more than ten snapshots per second and not more than a hundred bits per picture for some sixteen hours per day and all traces to endure as long as we live. This amounts to less than 10^11 bits. Most of our psychological friends would settle for a far lower figure. This generally rests on the contention that we could make no use of it, for a man can ordinarily process no more than twenty-five or twenty-six bits per second from his input to his output. They generally point out that we seem to do more because we organize our activities hierarchically, so that the higher trips off a train of lower performances.
Lashley was more concerned with those aspects of memory which could be conceived as cerebral waves of activity which he imagined as the basis of behavior. Frequently his opponents asserted that the brain was incapable of a sufficient number of modes of oscillation. Let us suppose the worst case, namely, that all neurons were driven by a clock pulse such that they could only change from state to state or stay in the same state at each pulse of the clock. It is easy to show that a collection of N neurons can then show 2^N-1 modes of oscillation under constant zero input according to the particular state they were in when the input dropped to zero. So one hundred neurons could exhibit some 10^30 modes of oscillation with constant zero input.
If we will settle for fixed inputs other than zero over a sufficient number of input lines, then the number is fantastically greater. Karl Schnabel has recently shown it to be exactly the sum, over k from 1 to 2^N, of (2^N)!/(k (2^N - k)!), which is more than (2^N-1)! Hence, we may forget this limitation. Finally, if the neurons are not clocked, and have adjustable parameters, the number of modes of oscillation of even two or three neurons is essentially infinite, as Leon Harmon has demonstrated with his neuromimes.
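One construction realizing the first count, the 2^N - 1 modes under constant zero input, is almost trivial and can be checked by brute force. The sketch below uses that construction as an assumption of ours rather than anything from the article: if every neuron simply re-excites itself at each clock pulse, each nonzero firing pattern persists as its own mode once the input drops to zero.

```python
# One construction (assumed here, not taken from the article) realizing the
# 2^N - 1 count: each neuron re-excites itself at every clock pulse, so every
# nonzero firing pattern persists as its own mode once the input drops to zero.
from itertools import product

def step(state):
    return tuple(state)          # self-re-exciting neurons: next state = present state

N = 4
modes = set()
for start in product((0, 1), repeat=N):
    if any(start):
        seen, s = [], start
        while s not in seen:     # follow the trajectory until it repeats
            seen.append(s)
            s = step(s)
        modes.add(tuple(seen[seen.index(s):]))   # the cycle it settles into
print(len(modes), "distinct modes for N =", N)   # 15 = 2^4 - 1

print(f"for N = 100: 2^100 - 1 is about {2**100 - 1:.2e}")   # ~1.3e30
```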
TIME
Time is of the essence of all living things—and we are wont to measure it in seconds, as Galileo did by counting his pulse. The scale we need stretches from light waves we distinguish, say 10^-14 sec., to evolutionary epochs, say 10^14 sec. Throughout the range from 10^-11 sec. for the mean half-life of the hydrogen bond in water to 10^9 sec. for the mean half-life of a man, we encounter rhythmical activities whose frequencies are fairly constant under surprisingly large ranges of events we would expect to affect them, like changes in acidity, oxygen supply, and even temperature. Other rhythmical activities, like the pulse or respiration, are very responsive to the requirements of the organism, and usually have a specialized control loop to keep them constant under normal conditions and to adjust them to their load under stress. Finally, there are some that, when free running, may be constant, or nearly constant, at a rate near that of some external cycle and can be locked into phase with it. These are the famous circadian rhythms, locking in with daylight; the menstrual cycle, which Dewan has found locks in with nocturnal illumination, as it probably does with the phases of the moon; and finally the seasonal rhythms of fertility clearly demonstrated by the behavior of most wild animals, as well as the domesticated sheep and goat.
At the present moment, there is an exponential growth of empirical information as to a whole spectrum of each of these types of rhythms, chiefly as to those having periods between 10^-14 sec. and 10^7 sec. Fundamentally, they are open systems, open to matter and to energy and, in many cases, to information. Moreover, they are so far from thermodynamic equilibrium that, despite Prigogine's valiant efforts, present reliable theories cannot cope with them. From a mathematical point of view, the behavior of these systems is so far from linear that the intuitions of the physicist lead him astray. For example, one naturally thinks that the average temperature of a mass of gold hung in a calorimeter will approximate the mean of the oscillating temperatures of the bath surrounding it, whereas it must be hotter, for radiation depends on the difference of the fourth powers of the temperatures of the sources and sinks.
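The gold-in-a-calorimeter example is easy to verify numerically. In the sketch below the bath temperatures are arbitrary round numbers of ours, and the body is assumed to exchange heat with the bath by radiation alone, so its equilibrium temperature is the fourth root of the time-average of the fourth power of the bath temperature, which is necessarily above the mean.

```python
# A numerical check of the fourth-power argument with arbitrary round numbers:
# a body exchanging heat with the bath by radiation alone settles at the
# temperature whose fourth power equals the time-average of T_bath^4, which by
# convexity lies above the mean bath temperature.
import math

T_mean, T_amp, steps = 300.0, 50.0, 100000   # bath swings between 250 K and 350 K
avg_T4 = sum((T_mean + T_amp * math.sin(2 * math.pi * i / steps)) ** 4
             for i in range(steps)) / steps
print(f"mean bath temperature : {T_mean:.1f} K")
print(f"equilibrium body temp : {avg_T4 ** 0.25:.1f} K")   # about 306 K, a few degrees hotter
```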
There can, of course, be no general solution of the equations of nonlinear oscillations, but a few years ago the great Russian works on this score were translated into English, including the famous Second Method of Liapunov which gives us topological insight into the behaviors of many such functions.
Most of us who are interested in the theoretical understanding of a neuron think of it as a nonlinear oscillator, either spontaneously active at a rate modulated by signals it receives, or else awaiting a signal to trigger it into a single response or into a train of responses constituting its signal to subsequent neurons. In either case, it is the configuration of its impulses in time that conveys the message. In any particular case one has to discover whether it is the mere occurrence of a single pulse, or a repetition rate, or, more generally, pulse interval or even the total number or duration of the train that codes the information. In some cases it is known, and eventually it will be determined for the rest. The only difficulties here are experimental.
It is quite otherwise when we come to the coupling of oscillating systems. It is bad enough with two systems, practically impossible with half a dozen, and even limit theorems are few and far between when we turn to large numbers for statistical simplicity. In epilepsy we do suffer from the locking-in phase of billions of neurons. The marvel is that living systems, brains included, can enjoy so many unlocked oscillations. One thing that makes this problem difficult is that one does not deal with the action of one upon another followed by the action of others upon it but with concurrent actions of each on all and all on each—whereas we must think these actions through sequentially. One wants a vectorial mode of thought that we still lack. Most of what we can say comes from the obvious fact that two coupled nonlinear oscillators cannot run at almost the same frequency. Hence, the spectrum of frequencies of our linked oscillators cannot be continuous. It is this which imparts to coupled neuromimes with adjustable parameters the discreteness of their modes of oscillation. The same holds for real neurons.
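The locking behaviour itself can be illustrated with a toy phase model of the kind now associated with Kuramoto, which is our choice of illustration and not a model from the article: two oscillators pulled toward one another's phase run at a common frequency whenever their natural frequencies are close enough, which is exactly why the spectrum of a coupled pool cannot be continuous.

```python
# A toy phase-coupling model (a modern Kuramoto-type pair, our illustration and
# not a model from the article): below a critical detuning the two oscillators
# lock to a common frequency; above it they drift apart.
import math

def locked(w1, w2, K=1.0, dt=1e-3, t_end=200.0):
    th1, th2 = 0.0, 1.0
    for _ in range(int(t_end / dt)):
        d1 = w1 + K * math.sin(th2 - th1)
        d2 = w2 + K * math.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return abs(th1 - th2) < 2 * math.pi   # locked pairs keep a bounded phase difference

print(locked(10.0, 10.5))   # detuning 0.5 < 2K: the pair locks          -> True
print(locked(10.0, 13.0))   # detuning 3.0 > 2K: the pair drifts apart   -> False
```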
NEURONS
Neurons are perhaps a tenth of all the cells of the central nervous system. The rest are the supporting cells, or glia. They completely surround all blood vessels, even the capillaries, and cover the surface of neurons except for the junctions of neuron to neuron, the synapses. Nothing can reach a neuron from the bloodstream or leave the neuron to go to spinal fluid or blood except through glia. They are of many kinds, with dissimilar functions. Aside from their functions in the metabolism of neurons, they are responsible for two items of electrical importance in our understanding of how brains work. Glia seem to have a rather low specific resistance internally and so constitute formed conductors of current in the volume surrounding neurons. Others function as insulators by wrapping around some taproots, or axons, of neurons, producing the myelin sheaths which are alternate layers of glial membrane, outside surface then inside surface followed by inside surface then outside surface. From here on we will neglect all of their other properties except to note that when neurons are transected or die, glia clean up the remnants and form scars through which no neural branches can find their way.
There were thought to be some 10^10 neurons in the human nervous system, but the estimate is continuously increasing. They are all there shortly after birth, but they are not all fully matured, particularly in the phylogenetically newer parts of the brain. They continue to change their form and the extent of their myelinization as long as we live. As we age, they die faster and faster. They always have a very high rate of protein metabolism, whether for growth, memory, or repair, and a high carbohydrate metabolism as a source of energy for their characteristic activity in handling information. Both of these are carried out chiefly in the body of the cell and shipped along its fibers. The nucleus remains in the cell body.
For the sake of description we will divide neurons into three classes. First, there are the so-called afferent peripheral neurons, because they bring signals from receptors to the central nervous system (CNS). They are bipolar in the sense that the two ends of the cell may look much alike. The shortest in man connect the photoreceptors of his eye to the ganglion cell layer of his retina, which is really a part of his central nervous system. The longest have one end in the great toe and the other at the base of the skull while their cell bodies sit outside of the spinal cord in the so-called dorsal ganglia. The biggest of these cells have myelin sheaths on both the process leading to the cell and on the part leading from the cell to its destination in the CNS.
The second class is called the efferent peripheral neurons, because they send signals from the CNS to muscles and glands either directly or, in the so-called autonomic system, by a relay in the sympathetic ganglion or in the organ they affect. The biggest of these are the motor neurons, with cell body in the ventral portion of the brainstem and spinal cord. Their receptive branches, or dendrites, fan out around the body, and the taproot, or axon, goes out by the ventral root. The larger axons have myelin sheaths, the smallest do not. These axons, with the afferent fibers of the dorsal root ganglions, constitute our peripheral neurons. The remaining neurons of the CNS are called internuncials, because they receive signals from neurons and deliver them to neurons. These range in size from the tiniest, whose thin axon does not leave the shadow of its short dendrites, to giant cells receiving nearly a quarter of a million inputs and having thick myelinated axons more than two feet long.
By 1908 Ramon y Cajal, using a capricious method that stains only some cells, but those often almost entirely, was able to locate and describe almost all types of neurons to be found in man. It was he who proposed the so-called neuronal hypothesis we now take for granted, that all fibers in the central nervous system are parts of neurons, which are the cells having nervous impulses.
When the bodies of neurons are destroyed, the axon dies and, if it has a myelin sheath, the myelin changes so as to pick up osmium. This has enabled us to trace axons for great distances but not their final terminations. For this we have now the Nauta stain. These methods, supplemented by a host of other stains, have given us a detailed knowledge of the anatomy—as it were, the wiring diagram—of the nervous system, whose full description would fill at least a dozen volumes today.
With the advent of electrical amplifiers about 1930, two new methods were employed by neurophysiologists to discover what structures are connected. Both use the nervous impulse. The simplest is to stimulate an axon and find out where its impulses arrive. Since the impulse goes in both directions, this method does not tell us which end is which. The second method is to strychninize a small group of cells, which then fire nearly synchronously, and to locate the places where their pulses arrive. Except for bipolar cells, which are notorious for firing backward, this method does not produce impulses ascending axons and hence does tell us which end is which. It has been most useful in showing the directed connections of each part of the cerebral cortex with other parts. Both of these methods are relatively rapid, for neither requires us to trace the intervening path, and the crucial anatomy of connection is deduced from the action at a distance from the origin. The tract must be there because the impulse is transmitted.
The newest advances in neuroanatomy are due to electron microscopy and are of greatest importance to us in understanding the details of impulse transmission to which we turn next.
Picture one neuron with its dendrites and a naked axon taking off from a hillock at its base. It is at or beyond this hillock that the propagated impulse starts. We will first consider the axon, above all its membrane, having a layer of protein molecules outside a double micelle of phospholipids, tail to tail, and an inner layer of protein, probably an unusual one. Inside the axon, there is a net streaming of proteins, perhaps globulin polysaccharides, from the body toward the end of the axon. Metabolism produces about a tenth of a volt, positive outside, negative inside. The membrane has a high, essentially fixed capacity, measured in microfarads per square centimeter, and a high resistance as long as the voltage gradient is maintained. The Ca++ ion is necessary to the integrity of the membrane. When Ca++ is displaced locally, say by an electromotive force, local disordering occurs, and the resistance falls locally. Na+ with its water shell then rushes in, and later K+ leaks out. This allows current to flow from the outer adjacent membrane into the depolarized spot and from that spot to the next adjacent membrane. This process partially depolarizes the adjacent membrane, and it becomes so locally disordered that Na+ enters and K+ leaks out at the new spot. Thus, the disturbance, called "the nervous impulse," propagates along the axon. Behind this moving front, Na+ is extruded, the voltage is reestablished, and K+ collects inside until its concentration is enough to balance the voltage gradient. This is a deliberately oversimplified description of the impulse—of which much more is known, thanks primarily to the pioneering quantitative experiments of Hodgkin and Huxley and secondarily to what was probably the first application of a big digital computer to a crucial physiological problem. Actually, the full equations are of too great complexity for simple consideration and call, for example, for an overshoot, so that the voltage of the impulse is greater than the original voltage through the membrane, which is what happens. Here again we are confronted by too many possible physical explanations, or by a lack of experimental data to exclude all but one of them, but the requisite data are to be expected soon from work on synthetic membranes.
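For readers who want to see threshold and overshoot emerge from the equations, the following is a minimal numerical sketch of the Hodgkin-Huxley membrane using the standard squid-axon parameters in the modern sign convention; the stimulus sizes and the crude forward-Euler integration are our own choices for illustration, not the authors' computation.

```python
# A minimal numerical sketch of the Hodgkin-Huxley membrane with the standard
# squid-axon parameters in the modern sign convention; the stimulus sizes and
# the crude forward-Euler integration are our own choices for illustration.
import math

C_M = 1.0                                   # membrane capacity, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3           # peak conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387       # reversal potentials, mV

def alpha_m(v):
    x = v + 40.0
    return 1.0 if abs(x) < 1e-7 else 0.1 * x / (1.0 - math.exp(-x / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v):
    x = v + 55.0
    return 0.1 if abs(x) < 1e-7 else 0.01 * x / (1.0 - math.exp(-x / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def peak_voltage(i_pulse, t_on=1.0, t_off=2.0, t_end=20.0, dt=0.01):
    v = -65.0                                            # resting potential, mV
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))            # gates start at steady state
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    peak, t = v, 0.0
    while t < t_end:
        i_stim = i_pulse if t_on <= t < t_off else 0.0   # uA/cm^2, brief pulse
        i_ion = (G_NA * m**3 * h * (v - E_NA) + G_K * n**4 * (v - E_K) + G_L * (v - E_L))
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        v += dt * (i_stim - i_ion) / C_M
        peak = max(peak, v)
        t += dt
    return peak

print("weak pulse  :", round(peak_voltage(3.0), 1), "mV")    # stays near rest
print("strong pulse:", round(peak_voltage(20.0), 1), "mV")   # spike overshoots 0 mV
```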
The next step on axonal membranes came from Lettvin in our laboratory. He argues that if our notion of the icy structure of water in the holes of membranes was correct, then a positive ion with a shell of bound water like that of calcium should replace it, and, if it had a higher valence, it should stick there and prevent excitation or conduction of the impulse. He found the ion, lanthanum, with a valence 3, and it did what he predicted. What is more, it is opaque to X rays and hence by X-ray microscopy tells us where the Ca++ normally sits in relation to the membrane. It does not get inside the cell. Finally he clinched the argument by finding an ion whose mobility, etc., is much like potassium with its water shell, but with no such shell. It is cesium, and it has no effect on the impulse. Thus, by every test to date, the structure of water is crucial.
Since the propagation of the impulse requires that the voltage of the next capacitative spot of the membrane must be pulled down to a critical value, i.e., to the threshold for disordering the structure, which is about two-thirds of its resting value, the velocity of propagation will depend upon the distributed capacity, the distributed resistance, and the distributed source of power. For the fattest axons this is over a hundred meters per second, and for the thinnest, well under one meter per second. The largest are myelinated, which, as an insulator, increases the speed of conduction. The sheath is discontinuous, having small gaps called the “nodes of Ranvier,” and propagation in these axons is saltatory, each node firing the next. We used to think of the propagated impulse going all the way to the termination of the axon. While the factor of safety for cylinders of reasonable diameter—say down to somewhat under one micron—is more than sufficient, this cannot hold where an axon divides, given both an increased resistance and an increased capacity. Here there is an even chance of failure of propagation, depending upon local conditions, particularly on the local distribution of sources and sinks of current due to the activity of neurons not too far away. We have been able to demonstrate such blocking even at the place where the larger axons of afferent peripheral neurons first bifurcate in the spinal cord. We know it to be a normal mechanism of blocking impulses, for we can produce it by pinching a paw before stimulating a dorsal root. It can be overcome by hyperpolarizing the afferent peripheral neuron, say by strychninization. It is doubtful whether any fully propagated impulse normally arrives on the next cell. It may do so with a decrement, that is, decreasing in amplitude and in velocity, or even that may fail and only electrotonus reach the end of the fiber.
When we consider a relatively thin axon approaching a neuron from the vicinity of its axon hillock to end by fine branches among its dendrites, and we realize that as impulses die in these branches they leave a sink of current there with a source near the trigger point, we see that it must raise the threshold of the neuron and that this effect will be increased by strychnine. Rapid repetitive firing of any axon must cause its impulses to die sooner, and in this case the repetition may eventually cause the impulse to die near the axon hillock, and the originally inhibitory effect may give way to facilitation or even excitation. None of these effects depend upon specialized neuro-neuronal junctions called “synapses.”
One must not think of the voltage through a membrane as if it were due to a static charge on a condenser. It is the result of two currents flowing in opposite directions. Every current through a resistance produces random fluctuation with the same energy in all frequencies. This is commonly called "white noise." Nodes of Ranvier have high resistance and consequently, at room temperature, produce white noise. Professors Verveen and Derksen, working in Leiden, have been making very precise measurements of the noise of nodes of Ranvier. For very high frequencies the noise will, of course, be capacitatively lost. Below this, down to a lower frequency, the noise is white. Below that frequency there appears noise whose energy is inversely proportional to the frequency, called "1/f noise." The frequency at which this begins may be determined in part by the diameter of the fiber but is principally dependent upon the potassium ion. 1/f noise is very familiar in vacuum tubes and transistors, but we are not aware of any satisfactory physical explanation in any one case. Somewhere below ten cycles per second this gives place to 1/f^2 noise phenomena, which they suspect may be related to Ca++, but we have seen no publication on that score. Since thermal noise increases with increase of resistance, it must increase as fiber diameter decreases. From Verveen's work it is clear that it is negligible in the giant axon of the squid, reaches say 1 or 2 percent in axons of say five microns diameter, but must exceed 30 percent in the finest axonal twigs, and hence must fire them occasionally. For this reason, if for no other, we will have to look to how computation can be carried on reliably by noisy components.
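The scaling argument behind those percentages can be sketched with the Johnson-noise formula, the square root of 4kTR times the bandwidth. The resistances, the bandwidth, and the twenty-millivolt gap to threshold in the sketch below are assumed round numbers of ours rather than Verveen and Derksen's measurements; only the trend matters, namely that the noise grows with resistance and therefore with fineness of fibre.

```python
# A rough sketch of the trend using the Johnson-noise formula; the node
# resistances, the bandwidth, and the 20 mV gap to threshold are assumed round
# numbers, not Verveen and Derksen's measurements.
import math

k_B = 1.380649e-23        # J/K
T   = 310.0               # K, body temperature
bandwidth     = 1e4       # Hz, assumed effective band of a node
threshold_gap = 0.020     # V, assumed swing needed to fire

for label, R in [("giant axon of squid", 1e5),     # ohms, assumed
                 ("five-micron axon",    2e8),
                 ("finest axonal twig",  3e11)]:
    v_rms = math.sqrt(4 * k_B * T * R * bandwidth)           # thermal noise voltage
    print(f"{label:20s} noise = {100 * v_rms / threshold_gap:6.2f} % of threshold")
```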
Such local flickering of impulses in twigs even of a single axon could scarcely be expected to be sufficiently in phase to sum and produce an impulse up the axon. Even if it did, the impulse would not invade the cell body under any normal circumstances. The capacity of the axon is too small for the capacity of the cell body. Thus conduction of signals is ordinarily from axonal terminations to dendrites and cell bodies and thence out the axon as a propagated impulse. From its occurrence in the reflex arc, in afferents, through the central nervous system and out efferents, this is called orthodromic conduction. Antidromic conduction can and does occur in some diseases, but it is rare and we shall neglect it. Transynaptic antidromic conduction is unknown except perhaps for the afferent peripheral neurons.
There seems little doubt that dendrites do affect neighboring dendrites, but as yet there is no convincing picture of a specialized connection. Axo-axonal synapses are relatively rare. The best known is on the giant axon of the squid and is excitatory. It behaves like an ideal diode, conducting forward into the giant axon with a typical diode curve but permitting no reverse conduction. Unfortunately we have no electron-microscopic picture of it.
Axodendritic and axosomatic synapses are plentiful, and there are many new beautiful pictures of them. They are essentially alike. The pictures show two clearly dissimilar groups of axonal terminal structures. Unfortunately, while we expect two—one excitatory, the other inhibitory—we do not know for certain which is which, nor do we know how to interpret these pictures in terms of how the apparent microstructures act.
Excitation, whatever the electrical or electrochemical process, must push Na+ into the recipient neuron. It applies an electromotive force directed inward. An inhibitory synapse does not do the reverse but merely lets current flow out, presumably by K+ leaking. Thus it divides an applied electromotive force by shunting it with a resistance to leak it off. This distinguishes it from interaction of afferents or rise in threshold of the recipient neuron, which are subtractive.
We came on the problem of divisive inhibition and its geometrical relation to excitation as a result of work on What the Frog's Eye Tells the Frog's Brain. In the frog's retina we were confronted by four well-defined types of ganglion cells whose axons go to the superior colliculi—called optic lobes—where they map four functions in four layers so that the maps are in register. The frog's retinal ganglion cells computed four dissimilar functions of the visual input. Our problem was to assign a particular kind of function computed to a ganglion cell with a particular dendritic tree. Lettvin and Maturana made the assignment, first by noting that two types of cells were small, gave small impulses which were slowly conducted, and hence two functions could be identified as arising from this pair of types. The other pair were large, with larger impulses which were rapidly conducted, and hence the other two functions had to be assigned to them. Next, the size of the field seen by a single ganglion cell could be used to assign functions properly to the two small cells. This established the sort of connectivity on which the functional computations had to be made. Finally, from these arguments it was clear that each of the large cells had a type of branching that allowed only one of the two possible functions to be assigned to each type on a basis of connectivity. It was then clear that one could make a general statement about excitation and inhibition in terms of the geometry which we call "Lettvin's algorism." It is of a familiar Euclidean type. On any dendrite neighboring excitations add, neighboring inhibitions add, excitations are divided by inhibitions next to them as one proceeds from the end of dendrites toward the trigger point. Effects on separate dendrites add at their junctions. The algorism is, of course, only a first approximation, for a distal shunt must have some effect on a neighboring proximal excitatory electromotive force, but it is better than might be expected, because dendrites are tapered, leaving authority to the proximal. Together, the geometry of dendritic fields determining connectivity and the algorism have enabled us to relate structure to function and guess the latter from the former for the first time. This made it possible for Lettvin, Maturana, et al., to try appropriate stimuli on the pigeon's eye and so to assign proper functions to most of the many more types of ganglion cells in its retinas. These rules have yet to lead us astray.
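As we read it, the algorism can be stated as a small recursive computation on the dendritic tree, and the sketch below encodes that reading; the particular numbers and the exact form of the divisive term are ours, chosen only to show that a shunt placed between an excitation and the trigger point attenuates it, while the same shunt on another branch does not.

```python
# A small recursive encoding of Lettvin's algorism as we read it; the numbers
# and the exact form of the divisive term are ours, chosen for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    excitation: float = 0.0                 # summed excitatory drive on this segment
    inhibition: float = 0.0                 # summed shunting (divisive) inhibition
    children: List["Segment"] = field(default_factory=list)   # more distal branches

def effect_at_trigger(seg: Segment) -> float:
    distal = sum(effect_at_trigger(c) for c in seg.children)  # branches add at junctions
    total = distal + seg.excitation                           # neighboring excitations add
    return total / (1.0 + seg.inhibition)                     # inhibition divides, not subtracts

# A shunt lying between the excitation and the trigger point attenuates it ...
print(effect_at_trigger(Segment(inhibition=3.0, children=[Segment(excitation=4.0)])))            # 1.0
# ... whereas the same shunt out on another branch leaves it untouched.
print(effect_at_trigger(Segment(children=[Segment(inhibition=3.0), Segment(excitation=4.0)])))   # 4.0
```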
Concerning individual neurons, two more points need emphasizing. First, no matter how much the receptor fields of two neurons and their dendrites overlap, no two receive precisely the same inputs. Hence, whatever function they compute, the set of arguments for which they compute it is unique. Second, not even in the most regular structures, like the big Purkinje cells of the cerebellum, are any two dendritic trees exactly alike even when they belong to the same class of trees. Consequently, no two compute exactly the same functions of the inputs they receive. The result is that when several of them agree, or fire together, the organism can be certain that something of the given class happened there, and that it was defined by the intersection of their arguments and their functions, thus narrowing the possibilities to a much more precise certainty. Such an agreement of dissimilar witnesses without collusion is the best way for any judge or jury to determine the fact. Its importance in the reliability of computation will be clear later.
It would be entirely in keeping with the spirit of this article for us to move next into the problems of evolution and development of the central nervous system, its degeneration with disuse and its regeneration, for there is much new work on each of these topics, and it is bringing to all of them the structure of protein molecules, which might be as specific as the globulins involved in immune reactions. We have in mind particularly the specific nerve growth factor of Levi-Montalcini. But none of these studies can yet begin to explain the structure of the central nervous system, which we must describe to show how the circuits of neurons can possibly account for behavior. Consequently, we begin with the general anatomical scheme oversimplified for clarity of presentation.
ANATOMY OF THE CNS
Down the back of the embryo there is a strip of ectoderm which folds into a groove and then into a tube to form the central nervous system. The front end of the tube seals off where it turns down to the pituitary stalk, and its lumen shrinks to zero throughout the spinal cord.
All along its dorsolateral edge bipolar cells form the dorsal ganglia, with one pole in the spinal cord and the other in the body, ending in sense organs.
The ventral portion of the neuraxis contains the motor neurons whose axons form the ventral roots. The circuit from dorsal roots to ventral roots is completed internally through internuncials, or interneurons. Externally it is completed by the motor or glandular structure affecting sensory endings. As soon as this begins to work, the motor neurons grow their dendritic branches. In some animals, like carnivores and primates, the bipolar cells make direct connection with the motor neurons, forming the two neuron reflex arcs, but this is the exception rather than the rule; for the rest, the internuncials are always involved.
Aside from a general tendency for motor neurons associated in activity phylogenetically to become bunched anatomically, the ventral, or motor neuron, pool has changed little in evolution. There is a general tendency for the larger bipolar cells to spin out longer ascending branches toward the head end forming the dorsal column of the spinal cord, finally reaching relays in the brainstem to the specialized thalamic nuclei relaying signals to the sensory cortex.
Near the midline, just below the old central canal, lies the core of the reticular net of interneurons. This reticular core has evolved least. The big changes have occurred in the dorsolateral area, and they are far greater at the head end, forming the great bulk of the brain. To these specialized structures we will return later, merely remarking that most of them are concerned with processing specialized sensory inputs, in large measure from receptors for smell, taste, vision, audition, and acceleration, situated in the head end of the animal.
Interneurons in the more dorsal internuncial pool of the spinal cord differentiate into layers, of which Rexed recognizes eight. At least the outer four are concerned with sensory discriminations and the gating of signals by signals in the same, or neighboring, segments or by signals from the head.
Ascending connections from these dorsolateral cells, with or without relays, connect many segments and report eventually to the thalamus and so to the cerebral cortex. A second group relays information to the cerebellum.
Lateral and ventrolateral to the reticular core, the interneurons are concerned with the coordination of motor activity. They determine the relation of flexion and extension at each joint and of joints in a limb, and of limbs in lying, standing, walking, hopping, etc. These, in turn, are organized in automatically associated movements by the enlargements of this ventrolateral reticulum in the head, called the basal ganglia. Their function is best described as the proper relation of the body to the body in posture and motion. Lesions unbalancing this system lead to the typical rigidity and tremor of the Parkinsonian syndrome, which is now yielding to proper intervention by surgical, chemical, and ultrasonic destruction to balance the system again. We may, therefore, think of these ventrolateral interneuron pools as embodying executive routines for programmed movement. The grace and skill of spontaneous activity are their regular business.
Ventral to the reticular core, and nearly surrounding the motor neuron pool, are small internuncials receiving their principal inputs from the most ventral portion of the reticular core, chiefly from the brainstem. These internuncials certainly inhibit motor neurons.
In lower forms of segmented animals each segment is so well organized that it governs itself extremely well. With the growth of the vertebrates, limbs appear, requiring the cooperation of several or many segments to govern them and, of course, of connections permitting coordination of fore- and hindlimbs. Yet, in Shurrager's adult dog whose spinal cord was transected at birth, the fore end of the animal learns to stress the rear end so that it behaves properly, standing, walking, hopping, and galloping as the fore end requires.
Since Sherrington’s great work on the integrative action of the nervous system, much has been learned of the detail of the circuit action of the closed loops that make possible such actions. He was concerned primarily with those loops that pass from outside the nervous system through it, and back to the periphery. They are of two kinds, one that provides activity, regenerative loops, and the other the regulatory loops which are inverse, or negative, feedback. Recent knowledge of the detailed action of these loops has come from microelectrodes and from implanted electrodes to monitor ongoing activity in the waking animal at rest and in action. Similarly, we have much new knowledge of similar closed loops of both kinds within the central nervous system. Our chief difficulties here are of a theoretical kind, for the circuit actions are nonlinear. Even if they were linear and severally stable, combinations of two or more (and there are many) might cramp at one extreme position or break into violent oscillation. We need quantitative information as to those closed loops and, above all, an adequate theory to explain why such systems work as well as we know they normally do. As the engineers are up against the same problems where they can control the specifications of the circuits and understand them quantitatively, we may expect help from them.
Be that as it may, it is clear that the spinal cord is composed of somewhat dissimilar segments, each able to make proper adjustments to local demands, organized into larger groups for coordinated movements of limbs, quarters and all four quarters, all in conformity with information and programs from the head end. Neurophysiologists are apt to think of the functional organization at each of these levels in terms of half-centers and their interactions—say one for flexion, one for extension—one for alternation of limb movements, one for synchronization, etc., as in respiration one for inhalation and one for exhalation. The scheme works well in guiding the implantation of electrodes for recording and for stimulating, even in the case in which one site of stimulation serves as a reward and the other as punishment over neighboring structures in the brainstem. The same holds for the results of stimulation of cortical area 8, i.e., a turning of head and eyes toward the opposite side, and in area 4, for flexion and extension evoked from neighboring points.
To the layman, the most familiar of these pairs of antagonistic halves is the autonomic system, the craniosacral, parasympathetic outflow usually cholinergic in the periphery—and the other, thoracolumbar, sympathetic outflow, usually adrenergic in the periphery. Their central components appear as half-centers in brainstem and are now fairly well localized. They are situated near those central structures that control hormonal balance, sleep and waking, etc.
At the present moment perhaps the most exciting work on such antagonisms is to be found in the cerebellum where time is of the essence of action. Thanks to Bremer we have long known that stimulation near its midline, having its exit via the fastigial nucleus, produced a relaxation of antigravitational thrust relayed by the ventromedial portion of the reticular core, called the bulbo-reticular inhibitory formation; whereas stimulation farther laterally, via the nucleus interpositus, increased the thrust: so, even in such a gross sense, it had half-centers.
THE CEREBELLUM
The cerebellum begins in the salamander as a bridge of few cells situated on fibers approaching it from the detectors of acceleration in its two ears. In the frog there is a macroscopic ridge across the back of the hind brain with a rather irregular row of such cells. Obviously, they can detect temporal differences of signals from the two vestibules and enable the frog to correct for tilts and turns. In a sense, then, it is an interval clock with time along the transverse axis where place of coincidence is determined by the rate of propagation along the transverse fibers from the two sides respectively.
Given such a circuit action it is obvious that it can be used for timing as well as quantifying the response of any executive structures and their motor outputs. As complexity of executive structures increases it is only natural to expect the cerebellum to pick up more and more inputs, to increase in number of transverse paths, and to increase in width to deal with more temporal elaborations. Moreover, as most skilled acts have recently been shown to be ballistic, the cerebellum must have knowledge not only of the position and acceleration of all parts of the body, as well as of the head, but must also be informed as to programmed acts and intended destinations, for it must precompute the signals to bring to rest at the right place whatever is put in motion. In man the former information comes by way of the so-called mossy fibers to some 10^10 or 10^11 granular cells having a T-shaped axon, the bar of the T running transversely through the big Purkinje cells which they excite. The information as to progress, intended acts, etc., must come over the inferior olive by way of axons that climb up the Purkinje cell and its dendrites to excite them as well as branching at least to the so-called Golgi cells that can block incoming signals at the granular cells. Since granular cells excite Purkinje cells, and probably some stellate cells and basket cells which inhibit Purkinje cells, we have a way of setting both intervals and quantities of cerebellar outputs in conformity with all requirements. For the newest physiological contributions on this score we are chiefly indebted to Eccles et al., and for the corresponding electromicroscopic anatomy, to J. Szentagothai et al. The most exciting aspect of these papers together is this, that there are two types of synaptic terminations visible, one with many neurofibrils and few smallish synaptic vesicles, the other larger, without neurofibrils and many vesicles, and that where identifiable, the former were excitatory, the latter, inhibitory. If this holds up in general we may have another aspect of neural nets relating structure to function.
Finally, the cerebellum, being an interval clock, should serve as an autocorrelator of signals and hence be an ideal device for bringing signals up out of noise, and so of great value in navigation. This would account for its preponderance over the rest of the brain in certain birds and, above all, in weak electric fish who can detect a one-millimeter glass rod at a distance of one meter in brackish water. Thus the cerebellum is a good example of a structure having, as it were, an executive half-center type of action timewise distributed in space that at that very time is directly useful in detection of the signals necessary for its own use.
The second bridge comparing signals from the two ears is concerned with hearing. It contains cells that fire in response to impulses from either side but not from both if they arrive simultaneously. Because the delays in reaching a given cell depend upon conduction time in their afferents and because the cells are so distributed as to segregate the delays spatially, we are able to detect the direction whence a sound comes by the phase shift due to the distance between our ears. Professor van Soest has told us that the best detection he found was by van der Pol, who could spot a shift of right to left and vice versa when the time difference was one microsecond. The stimulus was a click which has the great advantage of simultaneous attack of many frequencies and so stimulates at once detectors of many frequencies in the ear. Yet even beeps at frequencies of a thousand cycles per second were often sufficient for many people to report change of origin of sound when the phase difference is five or six microseconds. Even this is many times better analysis of intervals than can be expected of any single unit like a real neuron, for under the most carefully controlled conditions its firing and its period of latent addition fluctuate some thirty or more microseconds. A judgment good to one or even half-a-dozen microseconds requires the cooperation of many neurons. Because these neurons compute an exclusive or between right and left sides, they enable us to hear a signal in either ear alone despite the same much greater noise in both.
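That a population can judge intervals finer than any of its members is a purely statistical point, and a toy model makes it concrete. In the sketch below each detector compares arrival times jittered by the thirty microseconds quoted for a single unit and votes for the side that led; the number of detectors and the Gaussian form of the jitter are assumptions of ours.

```python
# A toy statistical model, ours, of interval judgment by a population: each
# detector compares arrival times jittered by ~30 microseconds and votes for
# the side that led; the majority over many such votes resolves a 5-microsecond
# true difference that no single unit can.
import random
random.seed(0)

def fraction_correct(true_itd_us=5.0, jitter_us=30.0, n_units=500, trials=2000):
    wins = 0
    for _ in range(trials):
        votes = sum(1 for _ in range(n_units)
                    if random.gauss(0.0, jitter_us) < random.gauss(true_itd_us, jitter_us))
        wins += 1 if votes > n_units // 2 else 0      # majority says "left led"
    return wins / trials

print("one unit  :", fraction_correct(n_units=1))     # barely above chance, ~0.55
print("500 units :", fraction_correct(n_units=500))   # ~0.98
```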
In these structures, as in the cerebellum, there are terminations of single axons which seem to be able to excite at one point and inhibit at another either on the same cell or on different cells, and we expect with intense interest a detailed anatomy with electron-microscopic pictures.
When we look at the visual system, with its input to the superior colliculus from the eyes so split that one colliculus sees what is in one side of the visual field of both eyes, and its motor nuclei, the third, fourth, and sixth, clearly organized as half-centers, we have the clearest example of the tie between the local input and the executive organization.
In fish, bird, and frog the superior colliculi constitute the principal cortical structure for analysis of vision. In mammals the cerebral cortex preponderates. In the monkey without visual cortex, the colliculus still allows him to respond to total luminous flux. But a man is stone-blind though he still has ocular afferents to his colliculi, and they still produce eye movements.
So much for the executive system, composed of half-centers with their proper reflexive circuits to adjust them to local conditions. It is ready and competent to do what is required of it.
During the past twenty years it has become increasingly clear that it is the reticular core of the nervous system that commits the whole organism to one of fewer than twenty incompatible overall modes of behavior—say, eat, drink, copulate, fight, fly, wake, sleep, etc. This unevolved core, out of which all other truly central structures of the nervous system have evolved to do its bidding, has outputs that direct and tune and set the filters on all inputs. It is this structure that decides which way to look and, having looked, what to heed. It controls the thalamic relays to the cerebral cortex and even the cortex itself. We know its anatomy and the responses of its neurons to many inputs. Even its so-called spontaneous activity in waking and sleeping animals has been recorded. Throughout the brainstem and spinal cord its structure is surprisingly uniform. Its cells are large, with dendrites that fan out at right angles to the neuraxis, sampling incoming, ascending, and descending signals, and with an axon that bifurcates, running up and down the neuraxis. Usually it has a recurrent collateral affecting it and its neighbors. The terminations of the branches of its axon often reach so many other reticular neurons that a hundred properly chosen might affect all the rest. The sampling of inputs by neighboring cells often is very different, and each is widely scattered. A single cell may respond to clicks, flashes of light, and touch of the skin of the right foot. The sampling may change from time to time, say from waking to sleeping. Thus it is a superb example of cells having well-nigh the greatest possible variety of inputs so that all its information is distributed. Characteristically, its cells rapidly cease to respond to any repeated insignificant input. Multiple electrodes disclose activity wheeling around among its cells. Probably this is part of its distributed memory by reverberation that may account for its ability to be conditioned. We know the system enjoys a redundancy of potential command in which information constitutes authority; for, whichever scattered members of its neuronal pool get the crucial information are capable of sweeping the whole system into essential agreement on one course of conduct committing the whole organism. In the most urgent cases the system cannot reverse its decision in less than about a third of a second, which means that agreement can be achieved in, say, a hundred interaction times—the minimum time for every one neuron to talk to almost all others.
For some years we and our collaborators have attempted to formulate a theory of the reticular circuit action to account for its known performance. The first model that comes to mind is one of nonlinear oscillators properly coupled, but for this the mathematics is still to be sought. The second model is an iterated net of computers, but here again, any question we would like to put turns out to be recursively insoluble. We had, therefore, to begin by combinatorial means, only to find that no small model so manageable could have the properties we needed. To enlarge the model, Kilmer and Blum first turned to computer simulation of its logical aspect; that is working and was presented at the 1966 Bionics Symposium in Dayton. Next we shall introduce proper delays to permit the so-called time binding necessary for conditioning.
The logic of the reticular core is abductive, the apagoge, or reduction, of Aristotle (GBWW, Vol. 8, p. 91a), which starts with rules and a fact and guesses that it is a case under a particular rule. It is like the differential diagnosis of medicine.
The logic of the executive system is deductive—which starts with a case of a given rule and concludes with a fact, namely the proper action. It is like a surgical operation.
The logic of the forebrain, to which we turn next, is primarily inductive, which from facts and cases invents rules, or, if you will, takes habits, and comes to know universals.
THE CORTEX
The oldest parts of the forebrain, excluding the basal ganglia, have their primary sensory input from the nose, but all parts of the body report to it through the reticular formation. Since signals from our internal organs make up much of its input, we would expect the old cortex to be primarily concerned with affects, feelings and emotions, and with what we heed and remember. We have already mentioned the hippocampus and its role in memory. For brevity we will call this older brain, from its position and shape, the limbic system.
Due to the vast development of the new cortex the limbic system is completely buried and, therefore, not as available for study. Moreover, it is easy to make out the principal thalamic nuclei and their projections to the new cortex, viz.: vision, by way of the lateral geniculate to the area striata of the occipital pole; sound, by the medial geniculate to the koniocortex of the superior temporal plane; somesthesia to the post rolandic gyrus; taste and touch of mouth, tongue, etc., to the cortex buried in the Sylvian fissure. And, never to be forgotten, the reticular core, by way of the anterodorsal nucleus to the frontal pole. It is this latter connection whose severance, while it does save the anankhast from his compulsions and anxiety, destroys in large measure his judgment; for the frontal pole is the computer of long-range strategy employed by the reticular core. As Nauta has recently shown, the principal cortical connections of the frontal poles are with each other and with the limbic system.
Each principal sensory receiving area has an adjacent area whose excitation moves the organ of that sense. With each there is an association area. Both the adjacent and association areas receive their chief cortical input from the primary receiving area, and both are richly connected with the corresponding area on the opposite side, whereas the primary areas are not.
Small lesions in primary receiving areas produce a focal loss of hearing, seeing, or feeling, of which the patient is never directly aware any more than he is of his blind spot, but they can be mapped by thorough examination. Small lesions in adjacent and associational cortices can rarely be detected even by our most painstaking procedures. This is probably what misled the great Lashley into his notion of the equipotentiality of these cortical areas.
To understand the problem we must delve deeper into the structure and circuit action of the cortex, as we promised. Even before that we must point out that our notions of projections to primary areas are somewhat suspect; for, first, what we took to be the principal information carrying signal, because it was the fastest and mapped topologically on the cortex, may still be an alerting signal carrying only the address of the origin of significant signals, and, second, every part of the cortex has so-called nonspecific afferents from the reticular system that may well not merely signal for attention but actually inform the cortex anywhere of the nature of the process at the location signaled over the fastest track. This suspicion arose from and has its factual backing now in the work of our collaborator, P. D. Wall, at least in Lashley’s pet animal—the rat.
Bearing all of this in mind, we may describe the local circuit action of the cortex in terms of small cylinders or columns to the center of which the specific afferents come and then divide, spreading horizontally in rather well-defined patterns according to the particular cortical area we happen to consider, so that each of these axons affects a set of neighboring columns. Each column is, say, one hundred neurons in depth, and the activity within it is such that excitation above is combined with inhibition below and vice versa. Lateral connection, perhaps dendrito-dendritic, spreads the effects to neighboring columns in the upper layers, and lateral dendrites of large cells in lower layers pick up effects from adjacent columns. Cells in the lowest layer have ascending axons reaching high into the upper portion of the column. In each column there are a few cells whose axons dip down below the cortex and reenter it elsewhere, forming the U-fiber system, the long subcortical associational systems, and the corpus callosum and anterior commissure—the latter two relating the two hemispheres. No two columns have exactly the same inputs, the same interconnectors or neurons that compute the same functions. The final output of every column comes from its largest cell whose axon ends usually in the reticular formation, and, even when it ends elsewhere, it is apt to send a collateral to the reticular formation, and, of these, a few send collaterals to the basal ganglia to inform the programmed activity of doings in the motor cortex related to somatic afferents.
From this it is obvious that every function is distributed, not in one column of cortex but in many, and differently from moment to moment. Hence, while knocking out a few hundred adjacent columns in a primary receptive area will leave a blind spot, one could not expect a similar result from equal damage in any adjacent or associative cortex. In fact, it would be almost inconceivable that one could detect the damage by any test. That does not mean that there would be no loss of function, for there certainly is a specific loss, but simply that the chance of finding the right test is of measure zero among all possible tests, even of the right kind. Scattered loss of individual cells or even of individual columns would certainly be undiscoverable, and probably insignificant.
Winograd and Cowan, in their Reliable Computation in the Presence of Noise (M.I.T. Press), employ such a strategy. Their computations go through successive ranks of neurons, each rank decoding, computing, and recoding for the next rank. With all components behaving erratically, but by properly increasing the richness of connectivity and the variety of the functions computed by particular neurons on such inputs as they locally receive, they are able to show that the errors need not increase from rank to rank. The mathematical theory turns out to be, for computation, the equivalent of Shannon's famous information-capacity theorem for transmission, and the proper analogue of the Ising model in many dimensions. What follows from this is that to ask the function of a particular neuron, or column, in any associational cortex is like asking the function of the nth letter in every mth word in the English language. You can probably read an English sentence with little difficulty if you knock out the second letter in every seventh word, and you will certainly have no trouble if you knock out the ninth letter from every word that has a ninth. For anyone interested in this at the highest level of poetry we would heartily recommend Empson's Seven Types of Ambiguity, for it also has the answer to the problem of common shifts of threshold under which every neuron computes a different function of its inputs but the input-output function of the net remains the same, as it does in the respiratory system under a general anesthetic.
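The letter-knockout analogy can be tried directly. The following sketch is ours, with an invented sample sentence, but it performs exactly the two deletions mentioned above.

# Demonstrating the redundancy of English: delete the second letter of every
# seventh word, or the ninth letter of every word that has a ninth, and the
# message survives.  The sentence is only an example.
def drop_letter_in_every_mth_word(text, letter_index, m):
    """Delete the letter at letter_index (0-based) in every m-th word."""
    words = text.split()
    out = []
    for i, w in enumerate(words, start=1):
        if i % m == 0 and len(w) > letter_index:
            w = w[:letter_index] + w[letter_index + 1:]
        out.append(w)
    return " ".join(out)

def drop_ninth_letter_everywhere(text):
    """Delete the ninth letter from every word that has a ninth."""
    return " ".join(w[:8] + w[9:] if len(w) > 8 else w for w in text.split())

sentence = ("The redundancy of connectivity in associational cortex makes "
            "individual components dispensable without destroying the message")
print(drop_letter_in_every_mth_word(sentence, letter_index=1, m=7))
print(drop_ninth_letter_everywhere(sentence))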
Anatomically, the price of reliability includes a richness of connectivity that supports diversity. What strikes every cortical anatomist is that he can at once, and without knowing the magnification, tell the same area, say the motor cortex, of a monkey from that of a chimpanzee, or of a chimpanzee from that of a man, by the ratio of cell bodies to total volume. The better the brain, the greater the ratio of space for connections to space for cell bodies. The better the brain, the more reliable it is and the less the possibility of assigning a function to a particular neuron.
Many nets can be resolved into series and parallel connections, but it is characteristic of any net embodying perceiving, feeling, and thinking that it must mix streams of information in as many ways as are necessary. Such nets can no more be analyzed into series and parallel paths than can an ordinary Wheatstone bridge.
When we compare the brains of monkeys with chimpanzees and theirs with man’s, we note two changes: (1) a great enlargement of the frontal pole—we have more foresight—and (2) a greater enlargement of an area at the upper end of the Sylvian fissure—we talk. The structures crucial for speech are there in us and not in other primates. Phylogenetically it is the last part to appear; ontogenetically, the last to mature. Structurally it is an associational cortex receiving its inputs from the associational cortices for somatic, auditory, and visual functions, but these connections are not developed in the first couple of years. They become fully operational only when they are fully myelinated, which may take a dozen years. We do not want to be too precise in such statements for, as everyone knows, some children walk before they talk, and some talk before they walk. The point we want to make is that until these associational inputs become somewhat operative, the child is like the monkey or the chimpanzee. All three can be conditioned to a visual, auditory, or tactile cue preceding some effective stimulus, like candy, some reward that has its path through the limbic system. They can discriminate between a seen square and a seen circle, and between a felt square and a felt circle, but they have no carry-over from one to the other for square or circle. They lack the connection to the common meeting ground at the end of the Sylvian fissure. The same is true for sounds. Frequently parents tell us that a child is color-blind, for while he has all the color names he uses them at random. The answer is: he is too young, probably not yet three years old. Wait a bit and he will suddenly have them all correct.
Norman Geschwind, in his "Disconnexion Syndromes in Animals and Man," has made the point correctly. There are things in our world that are the same thing for us whether they be heard, seen, or felt, because we have a cortex wherein these associations of associations can be made; monkeys, chimpanzees, and babies do not have it. There is for them, therefore, no possibility of associating the spoken and heard word with the thing they do not conceive. For this area embodies the sensorium commune of the talking animal called man.
Geschwind has made this very clear by studying the effects of disconnections producing agnosias, apraxias, and aphasias, and has so shown us the anatomical basis of language. In many of his cases, and especially in cases in which the corpus callosum and anterior commissure are cut to prevent the spread of convulsions from one side to the other, one simply cannot speak of the man as a whole. In the right-handed man it is the left hemisphere that talks, reads, and writes. The right hemisphere is nonverbal but may be capable of performing visuo-spatial tasks better than the left hemisphere. So, even when the whole man is talking or writing, we are hearing and reading what the dominant hemisphere has to say and, only hopefully, what it knows from the nondominant hemisphere. If consciousness be conceived as an agreement of witnesses, and this be taken in a verbal sense, then the minor hemisphere cannot be said to be conscious, though by other means we know that it knows, for it acts accordingly. Thus human speech at its best falls far short of the communion we experience with our fellowman.
DIALOGUE
For this rich communication when we come face to face, we use the term “dialogue,” including in its interchanges our words, but not them only.
Let us begin with words spoken, four syllables to a heartbeat, and for simplicity imagine the sun is shining and I say to you, "The sun is shining." The sun shining just is. Peirce says it has firstness. The sound of my words has firstness, about one phoneme in 10⁻¹ sec., but my proposition "The sun is shining" is true or false of the world of fact, so it has secondness. Finally, as the Stoics said, for the proposition to be proposed I must have had something in my head like a fist in my hand. This they called the "lekton." If you, to whom I made the proposition, grasp it, then there is a lekton in your head, like a fist in your hand. Thanks to Geschwind, we can locate and anatomize the lektons in both heads. The lektons have thirdness, for they recognize the intention of the proposition as a noise bespeaking the shining sun. Thus four things (the noise, the sunshine, and the two lektons), properly related, constitute the minimum structure of the dialogue. It consists of two triadic intentional relations with the two colligative terms, namely the noise and the sunshine in common in our public worlds, and the private lektons in our heads severally, and shifting ten times a second.
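The minimal structure just described, four terms entering two triadic intentional relations through the two shared colligative terms, can be written out as a small data structure; the class and field names below are ours, chosen only to display the relations.

# A data-structure sketch of the "minimum structure of dialogue": four terms
# and two triadic intentional relations.  Names are ours, for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str          # e.g. "the noise", "the sunshine", "speaker's lekton"
    privacy: str       # "public" (shared world) or "private" (in one head)

@dataclass(frozen=True)
class IntentionalRelation:
    lekton: Term       # the private term that does the intending
    sign: Term         # the public noise that bespeaks...
    intended: Term     # ...the public state of affairs intended

noise = Term("the spoken noise 'The sun is shining'", "public")
sunshine = Term("the sun shining", "public")
my_lekton = Term("speaker's lekton", "private")
your_lekton = Term("hearer's lekton", "private")

# Two triadic relations sharing the two colligative (public) terms:
dialogue = (IntentionalRelation(my_lekton, noise, sunshine),
            IntentionalRelation(your_lekton, noise, sunshine))
print(dialogue)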
Unfortunately we have no logical calculus of intentions and obviously none of intentional relations. That is possibly one of the reasons why we, in psychology and psychiatry, often talk such tedious nonsense.
No factual statement today concerning tomorrow is true or false today. If I say to you, "It will rain tomorrow," then I do not have to tell you that it is I who think or hope or fear or believe it, for you know it is I who said it, and you will take it properly that I have made such a judgment. But, as Gotthard Gunther properly points out, for me to make that judgment requires some self-referential system in me that distinguishes between facts for the nonce and expectations for the future. This is necessary when comparing fact with fancy, reality with dreams, earnest with jest, and fighting with playing. It lies at the core of distinguishing every intentional act from the thing intended by the act. Moreover, Gunther has insisted that any logic adequate for human speech must have ontological room for the necessary distinctions, and that the introduction of this new axis always doubles the number of places required. For mere fact, the real axis, one value suffices; for the statement, two values, true and false; for the lekton, four; and for dialogue, eight. This has its parallel in the mathematical presuppositions of physics: one value on the real axis, two in the Argand plane, four in the Hamiltonians, and eight in the Cayley numbers needed for the theory of relativity; and the corresponding logic is no longer even modular. It is, after all, the beginning of a relational logic of becoming and may serve as a kind of model for living dialogue.
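The doubling Gunther points to is, on the mathematical side, the Cayley-Dickson construction, by which pairs of reals give the Argand plane, pairs of those the Hamiltonians, and pairs of those the Cayley numbers. A minimal sketch, with the pair-of-pairs representation assumed purely for illustration, shows the doubling and the loss of commutativity already at four components.

# Cayley-Dickson doubling: pairs of reals are complex numbers, pairs of those
# are quaternions, pairs of those are Cayley numbers -- 1, 2, 4, 8 components.
def conj(x):
    if isinstance(x, (int, float)):
        return x
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, (int, float)):
        return -x
    a, b = x
    return (neg(a), neg(b))

def add(x, y):
    if isinstance(x, (int, float)):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if isinstance(x, (int, float)):
        return x * y
    a, b = x
    c, d = y
    # (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Two quaternions, each a pair of complex numbers (themselves pairs of reals):
i = ((0.0, 1.0), (0.0, 0.0))   # the quaternion i
j = ((0.0, 0.0), (1.0, 0.0))   # the quaternion j
print(mul(i, j))               # i*j = k  -> ((0.0, 0.0), (0.0, 1.0))
print(mul(j, i))               # j*i = -k -> commutativity is already lost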
Our second theoretical difficulty arises from another source. Thanks largely to Noam Chomsky, the analysis of context-free, phrase-structured language has been reduced to an exercise in group theory; but no natural language is ever context-free, even when it is written carefully, and in real dialogue the context often carries most of the information. One has only to tape-record the dialogue to discover that a large fraction of the sentences are never finished, nor need be, for the meaning has already been transmitted.
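For readers unfamiliar with the term, a context-free, phrase-structured grammar of the kind Chomsky analyzed can be caricatured in a few lines; the toy rules below are ours and stand for nothing in particular, the point being that each rewriting ignores context entirely, which is just what real dialogue never does.

# A toy context-free, phrase-structured grammar: every rule rewrites a phrase
# without regard to its surroundings.  Rules and words are invented.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V"], ["V", "NP"]],
    "N":  [["sun"], ["listener"], ["sentence"]],
    "V":  [["shines"], ["answers"], ["trails", "off"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:              # terminal word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

random.seed(1)
print(" ".join(generate()))    # prints one sentence derived by the grammar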
The logician’s definition of the meaning of a sentence is clearly useless in this case, for it is defined as the class of all those sentences that are true, or else false, under the same circumstances, whereas Donald MacKay’s definition fits the facts. Remembering the hortatory aspect of our utterances—that we never speak except to affect the listener—he defines the meaning of a signal as the selective function it exercises upon the transitional probabilities of the overt or covert behavior of the recipient. This goes for all the rapid cues we enjoy in dialogue.
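Read this way, MacKay's definition can be given a minimal sketch: the recipient's readiness is a distribution of transitional probabilities over behaviors, and the meaning of a signal is the selective reweighting it performs on that distribution. The behaviors and weights below are invented for illustration.

# A sketch of meaning as a selective function on the recipient's transitional
# probabilities; all behaviours and numbers are invented for illustration.
def normalize(p):
    total = sum(p.values())
    return {k: v / total for k, v in p.items()}

# The hearer's transitional probabilities before the utterance:
readiness = normalize({"stay indoors": 0.5, "go outside": 0.3, "fetch umbrella": 0.2})

def sun_is_shining(p):
    # The signal "The sun is shining" as a selective function: it reweights
    # the probabilities of the hearer's overt or covert behaviour.
    selection = {"stay indoors": 0.2, "go outside": 3.0, "fetch umbrella": 0.1}
    return normalize({k: v * selection[k] for k, v in p.items()})

print(sun_is_shining(readiness))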
MacKay has pointed out that in dialogue, as opposed to exhortation, instruction, or competitive monologue, each exposes as best he can his model, or map, of the world as he sees it, with the hope that his partner will fill in the gaps or correct the mistakes, which his partner does by exposing his own map of that part of his world. Hence much of the communication is concerned with finding the proper parts of the maps and with the interpretation of one another's symbols of description. In this, many of the cues flashed are not words but nods, shakes, shrugs, and hand waving. In our culture we demand a clearer statement or more explanation by clamping our jaws and frowning; we acquiesce with a smile; we question the validity of statements by lifting an eyebrow or tilting the head, and the speaker receives all of this while he is speaking. And the flickering combinations of communicative or expressive movements of face, hands, and voice speak in a time that joins experiencing for further common symbol-forming.
Gregory Bateson has shown us that dialogue is not really concerned with its official, or verbal, content but is aimed at settling the social questions of our relations to one another, and once our attention is called to this, it becomes obvious. When boy meets girl, the content is often a mere ornament to the substance of the dialogue. The effective ingredient, carried by gesture and inflection, he calls "metacommunication," without which mere words may go astray.
Birdwhistell has been even more concerned with the changes in pitch or accent necessary for an understanding of our otherwise ambiguous utterances, and he has studied the bodily gestures that serve the same ends. Here the meta-language is concerned with the content. Finally, dialogue is not a simple alternation of active speaking and passive listening turn by turn. Both partners are continuously observing and sending many cues. It is a closed loop of many anastomotic branches through which there runs at a split-second pace an ever-changing symphony and pageant relating man to man ever more richly.
Water with water, macromolecule with macromolecule, cell with cell, and man with man, we see the dance of life as James Clerk Maxwell imagined it—good physics, and getting better, good logic, and getting better, and, once again, a good natural science ontologically rich enough to embrace human dialogue.
REFERENCES
Berendsen, H. J. C. “An N.M.R. Study of Collagen Hydration” (Nuclear Magnetic Resonance), Druk: V.R.B.—Kleine Der A, Groningen, 1962.
Craik, K. J. W. The Nature of Explanation. London: Cambridge University Press, 1952.
⸻. The Nature of Psychology. London: Cambridge University Press, 1966.
Davidson, P. F., and Schmitt, F. O. (eds.). "Brain and Nerve Protein: Functional Correlates," Neurosciences Research Program Bulletin (Boston), Vol. III, No. 6 (December 31, 1965), pp. 1-55.
Geschwind, N. “Disconnexion Syndromes in Animals and Man,” Brain (London), Vol. LXXXVIII (June, 1965), Parts II and III, pp. 237-94 and 585-644.
Goodwin, B. Temporal Organization in Cells. New York: Academic Press, Inc., 1963.
Hamori, J., and Szentagothai, J. “Identification Under the Electron Microscope of Climbing Fibers and Their Contacts,” Experimental Brain Research (Springer-Verlag, Berlin, Heidelberg, New York), Vol. 1, Fasc 1 (1966), pp. 64-81.
Kavanau, J. L. Structure and Function in Biological Membranes, Vols. I and II. San Francisco: Holden-Day, Inc., 1965.
Kluver, H., and Bucy, P. C. “Preliminary Analysis of Functions of the Temporal Lobes in Monkeys,” Archives of Neurology and Psychiatry (Chicago), Vol. XLII (1939), pp. 979-1000.
MacKay, D. “Towards an Information Flow Model of Human Behavior,” British Journal of Psychology, Vol. XLVII, Part I (February, 1956).
McCulloch, W. S., and Pitts, W. H. “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics (Chicago), Vol. V (1943), pp. 115-33.
McCulloch, W. S. Embodiments of Mind. Cambridge, Mass.: M.I.T. Press, 1965.
Rosenblueth, A., Wiener, N., and Bigelow, J. “Behavior, Purpose and Teleology,” Philosophy of Science, Vol. X, No. 1 (January, 1943), pp. 18-24.
Wallace, A. R. Contributions to the Theory of Natural Selection. London: The Macmillan Company, 1870.
Whipple, H. E. (ed.). “Forms of Water in Biological Systems,” Annals of the New York Academy of Sciences, Vol. CXXV (October 13, 1965), pp. 249-772.
NOTE TO THE READER
Drs. McCulloch and Brodey themselves indicate the background information that is relevant to the recent developments in biology that they describe. Scientific biology began with the researches and writing of Aristotle and the Hippocratic school of medicine; their major works are reprinted in Vols. 8-10 of Great Books of the Western World. Descartes, in Discourse on the Method, advanced the hypothesis that animals can be considered as automata and explained by the laws of physics (Vol. 31). Darwin provided one of the few general theories of biology capable of unifying many diverse fields of knowledge (Vol. 49).
The Syntopicon provides a guide to discussion in the Great Books on the particular topics considered by Drs. McCulloch and Brodey.
Life and Death 1 and 2 cite passages bearing on the nature and cause of life and the continuity or discontinuity between living and non-living things.
The authors discuss the latest empirical theory of sensation and perception. On this subject the whole of chapter 84 on Sense is relevant, particularly the passages cited under the various topics under Sense 3, on the analysis of the power of sense.
On memory, learning, and the perception of time, the reader will find much of interest in chapter 56 on Memory and Imagination.
Dr. McCulloch, as an "experimental epistemologist," would willingly accept comparison of his theory with other theories of knowledge. Chapter 43, on Knowledge, provides the materials for such a comparison.