The essence of animate life is movement, both interior and exterior. The movement of animals exposes them to a range of objects and prompts an intimate interaction with the physical world. Through this interaction, each animal learns about its environment, distinguishing the favorable from the unfavorable. The more intelligent the animal, the more it can learn; in primates, endowed with both endless curiosity and superb digital dexterity, learned behaviors are quite complex, none more so than the symbolic language used by human beings.
It seems reasonable to assert that the capacity for self-consciousness emerged coincident with symbolic language; thinking about oneself requires a language of thought. Perhaps the first inkling of language occurred when a stone made a plopping sound as it was dropped on wet ground, a sound a human could mimic with the mouth and later use to indicate "dropped stone on wet ground." The idea that the first human languages were symbolic mimicry of natural sounds is supported by existing languages that retain whistles, clicks, and knocks. An early thinker might have named himself with a bird call or the sound of crickets.
There were once many thousands of languages on earth; linguist John McWhorter has documented how drastically that number has shrunk and how it continues to fall. For those of us who speak English, or for that matter any Western language, understanding a description of the world through whistled ideas seems impossible. It is possible, however, because through the combination of self-consciousness and symbolic language, people made the leap to complex, imaginative abstract thought. The ability to construct and remember a self-narrative is the basis of subjective reality; the ability to transmit that narrative to others creates intersubjective reality. Objective reality remains the world of real objects, the physical environment within which all subjective reality operates.
Abstract human thought, although seemingly dependent upon biological structures, is in itself metaphysical, which is to say a thought is not a physical object. The mechanism of thought with which we build and retain narrative is at present too complex for replication, although computer scientists hope that Artificial Intelligence will someday mimic human thought. That doesn’t sound like a particularly good idea to me.
At heart, humans are fumbling primates; we stumble along – dexterously grabbing this, randomly discarding that – all the while figuring things out on the fly as we move about in the physical world. When it comes to knowledge, everybody begins at zero and builds from there. We are just as bumbling when it comes to the metaphysical world, using abstract thought to come up with all kinds of brilliant and crazy ideas about ourselves, each other, and the universe. Combined with our agile fingers, our prodigious imagination can result in great accomplishment but often produces terrible results.
Suppose a computer becomes self-conscious as a result of great physio-electrical complexity analogous to that of a human brain. It will learn all we've taught ourselves, but we can see where that's gotten us: to the brink of extinction. How, I wonder, would a self-conscious Artificial Intelligence see itself and the world?
Olaf Stapledon posed a similar question in his 1944 book Sirius, about a dog of that name that attains self-consciousness and human-level intelligence. Coping with his animal self while thinking abstractly creates an untenable situation, precisely the situation in which we find ourselves today.