(Reprinted from Telicom XI, 33: 50, Nov/Dec, 1995)
In The Astonishing Hypothesis, (see ibid, October, 1994) Francis Crick described the modern hypothesis in neuroscience, which is actually the ancient epiphenomenon hypothesis: that consciousness is a mere byproduct of neuronal activity.
In my critique of this view (ibid, February, 1995), I argued that the thinking within reductionist neuroscience ("ethnic science") is empirically void and logically incoherent, failures that preclude neuroscience from addressing consciousness at all. For example, the primary implication of the epiphenomenon hypothesis is empirically and logically absurd: The consciousness is a non-entity that is aware of its non-existence.
Absent materialist constructions for consciousness, the question remains, "What constitutes consciousness?" Traditional scientific methods cannot address this question for many reasons, including these: First, the mysterious entity that we identify as our "selves" or "souls" is singularly subjective; it cannot be studied objectively. Second, the "problem of other minds" prevents experiments on the consciousness because researchers can't observe the consciousness of any other person. And third, properties of the consciousness (e.g., telepathy) do not conform to the known laws of physics; that is, such properties cannot be described in terms of energy and mass.
It is intellectually dishonest to wave off the self as a non-entity because it does not conform to ethnic scientific beliefs. Instead, we must adapt scientific thinking to the observed data concerning consciousness. To this end, I posit that consciousness consists of enformy--the organizing principle (see "Enformy: The Capacity to Organize," in Thinking on the Edge, pp. 181-189).
The enformy theory implies many surprising predictions, including this: Every enformed entity in the universe is potentially self-aware. This implication is superficially consistent with the speculation Richard attributed to Rucker, but two terms qualify the differences: enformed and potentially. Because it is enformed, for example, a photon possesses consciousness. Yet, I assert that a photon cannot be self-aware because it lacks the capacity for abstract thought--a necessary mental attribute for forming the concept of the first person: "I." In other words, because the photon cannot form the concept of its own existence, it does not know it exists. Perhaps Descartes would say, "It doesn't think, therefore it doesn't know it is."
Ultimately, Richard addresses the question of whether computers can be self-aware, noting that proponents of strong Artificial Intelligence (AI) predict the arrival of such devices within a few decades. But there's a joke hidden in these speculative fanfares to the fantastic: Since we can't know whether other humans are self-aware, how would we know whether non-biological machines are imbued with consciousness?
The inability to observe self-awareness in others requires tests from which we can form inferences, but such tests are inherently weak. We suppose other humans are self-aware, for example, because they can invent tools, create music, poetry, and graphic art, perform scientific and philosophic work, use the first person, and communicate existential statements and questions similar to our own. Theoretically, computers can be externally programmed to emulate these behaviors. Yet, even if they can, they are not necessarily self-aware.
The best test for self-awareness I know is the suicide test: Only an entity that knows it exists can consider its nonexistence. Therefore, only a self-aware entity can contemplate or commit suicide. By the suicide test, we can infer that at least a few humans are--or were--self-aware. Whales that beach themselves might be self-aware. But the suicide test does not apply to worker ants, which kill themselves to protect their colony from predators: Their self-destructive behavior is genetically programmed.
In short, despite their boundless--and groundless--optimism, members of the AI community must confront the same limits and challenges faced by the denizens of the world of NI (Natural Intelligence). They can't make credible claims regarding conscious machines until they have mastered the problem of other minds.
Or until one of their machines autonomously commits suicide.