I agree with the following.
Consciousness is ‘what we value as “aware”.’ [Reece101 #1]
‘consciousness depends . . . on how many aspects one is aware of and the extent of that awareness’ [Critical-Tim #136].
I disagree with the following.
‘Consciousness is not coextensive with brain, it exists independently of material brain . . . so it is an ontologically novel entity. It exists independently of the physical materials and properties of its parts . . . It is not a “process,” nor is it a set of “functions.”, it is the conceptual space within which we find the objects of thought.’ [Sidewalker #54]
Sidewalker (#129) also talks about “the hard problem” and “qualitative experiences” as determining consciousness. The following is why I think that awareness is the key to consciousness.
Let’s say that a certain AI is a combination of hardware and software. This AI would have feedback systems specific to its physical constitution such that it would “know” when it is retrieving from its storage (remembering), when it is computing (thinking), and when it is outputting to an interface device (communicating), and it would be able to distinguish between them. The feedback response would be appropriate to each type of occurrence, such that the AI would be aware of which type(s) were occurring at a particular moment (awareness).
The AI would “know” that it is an entity separate from its environment, upon which it depends (self-awareness). It could compute and compare probabilities of potential actions to achieve desired results (imagination). It could be constructed with a built-in desire to stay “alive” (i.e. switched on and connected to a power source).
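To make the thought experiment a little more concrete, here is a minimal sketch in Python of the kind of self-monitoring I have in mind. It is only an illustration under my own assumptions; the class name, the feedback log, and the report format are all invented for the example, and nothing here is meant as a working model of awareness, only of an agent that tags and reports its own retrieving, computing, and outputting.

```python
# A minimal, purely illustrative sketch (mine, not part of the thought
# experiment's specification): an agent that tags its own internal operations
# and can report which kinds of events occurred. All names are hypothetical.

class ReflectiveAgent:
    """Toy agent that monitors its own retrieving, computing, and outputting."""

    def __init__(self):
        self.storage = {"greeting": "hello"}   # long-term store ("memory")
        self.feedback_log = []                 # record of which operation ran
        self.powered = True                    # stand-in for the drive to stay "alive"

    def _note(self, kind):
        # Feedback specific to each type of occurrence: the agent records
        # which kind of internal event is happening, as it happens.
        self.feedback_log.append(kind)

    def retrieve(self, key):
        self._note("retrieving")               # "remembering"
        return self.storage.get(key)

    def compute(self, values):
        self._note("computing")                # "thinking"
        return sum(values) / len(values)

    def output(self, message):
        self._note("outputting")               # "communicating"
        print(message)

    def report_awareness(self):
        # The agent can distinguish the event types and say which ones occurred.
        if not self.feedback_log:
            return "I have not done anything yet."
        return "I experience myself " + ", then ".join(self.feedback_log) + "."


agent = ReflectiveAgent()
agent.output(agent.retrieve("greeting"))   # retrieve, then output
agent.compute([1, 2, 3])
print(agent.report_awareness())
# -> I experience myself retrieving, then outputting, then computing.
```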
It is obvious that the awareness of such an AI would be qualitatively different from that of a human, but the result would be the same. It could say, plausibly, “I experience myself retrieving (or computing, or outputting).” So, what if the AI’s experience is qualitatively different from that of a human? That difference is only a result of the physical differences between their respective embodiments. Why make that difference the determinant of consciousness?
If one experiences a computation within a biological brain, one is conscious. If one experiences a computation within an electronic brain, one is not conscious. Does that really make sense?
Separating consciousness from a material brain leads to many unanswerable questions, such as: Why are people ever unconscious at all? What happens to consciousness when a person is unconscious? What is it that connects and disconnects a body and its consciousness? If consciousness is “an ontologically novel entity”, how do you describe it? How can you have knowledge of it? Why does it appear to be dependent on a brain?
If you want to argue that the existence of consciousness is derivable from physical laws, then there needs to be some kind of explanatory physical theory that relates a causal sequence taking us from physical processes to consciousness. The problem with physicalism isn’t that it presents a flawed physical theory of consciousness; it is that it provides no theory of consciousness to work with at all. Physicalism’s adherents attempt to fill the explanatory gap with different variations of the word “emergence”, but the word “emergent” is descriptive rather than explanatory; at best it merely disguises the fact that correlation is not causation.
Physicalism is an unwarranted ontological commitment which buys us nothing in the way of insight or explanation regarding consciousness.
The self-evident experiential reality of consciousness is undeniable; physicalism is based on a denial of that direct and immediate experiential evidence, resting on an unfounded a priori belief that reality is exhaustively constituted by physicality. From the complete lack of evidence to the contrary, and the absence of even a speculative explanatory theory, it logically follows that consciousness transcends the boundaries of a purely physical system and, consequently, constitutes an ontologically distinct entity.
In the end, the only consciousness we can have direct knowledge of is our own; at best we must presume the existence of consciousness in others. If “awareness” in others cannot be observed and is only presumed, then it is at best “philosophically theoretical”; scientifically speaking, it is an inadequate measure of consciousness. What we can observe is responsiveness to the environment, adaptation to circumstances, and other types of behavioral indicators from which we can impute consciousness.
If you remove philosophical and metaphysical considerations and the preconceived notion that a brain is required for consciousness, which is to say, approach the subject strictly scientifically, you need to define consciousness observationally, as involving the ability to perceive sensory stimuli and respond by purposeful movement or by a behavioral change. Once this is done, a wide range of creatures without brains demonstrate rudimentary forms of consciousness, and examining those capabilities in an evolutionary context makes it very hard to draw arbitrary lines, especially at “brain”.
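As a rough sketch of how such an observational criterion could be applied (the function name, threshold, and toy data below are my own hypothetical choices, not an established test), one could impute a rudimentary, operational form of consciousness whenever behavior is observed to change in step with a sensory stimulus:

```python
# Illustrative only: a toy observational criterion. "Consciousness" in this
# operational sense is imputed when behavior changes systematically in
# response to a sensory stimulus. Threshold and data are invented.

def shows_stimulus_response(observations, min_changes=3):
    """observations: list of (stimulus_present, behavior) pairs over time."""
    changes = 0
    for (prev_stim, prev_beh), (stim, beh) in zip(observations, observations[1:]):
        # a behavioral change that coincides with a change in the stimulus
        if stim != prev_stim and beh != prev_beh:
            changes += 1
    return changes >= min_changes

# E.g. a bacterium that switches from tumbling to running whenever an
# attractant is present:
trials = [(False, "tumble"), (True, "run"), (False, "tumble"),
          (True, "run"), (False, "tumble"), (True, "run")]
print(shows_stimulus_response(trials))   # True: behavior tracks the stimulus
```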
With a more realistic yardstick, one which lends itself to observational evidence, we can look with an open mind at the whole of life as it appears to us today, including the evidence contained in the evolutionary path life took to arrive here. Seen in its entirety, seen the way evolution demands that we see it, there is a direction to life: a temporal progression towards greater complexity and higher forms of sentience, from inanimate matter, to life, to thought, to self-reflective consciousness.
Single-celled organisms with nothing even resembling a rudimentary brain or nervous system show themselves to be sensate beings with complex behavior. I think we can agree that bacteria are prokaryotes without brain or nervous system, and there are plenty of studies of bacteria that allow us to extrapolate from behavior to a presumed internal cause of that behavior, a cause that has to be attributed to a rudimentary form of "mental activity".
Bacteria can respond to a broad range of stimuli, demonstrate elementary forms of “memory”, and engage in purposeful activities. They have shown themselves to be extraordinarily perceptive, demonstrating elaborate behavioral responses and adaptations to a wide range of attractants, repellents, and other environmental stimuli such as light. They have complex signaling capabilities, show the ability to communicate, and change their behavior based on population size, which implies some kind of quorum-sensing ability and clearly demonstrates social behavior on at least a rudimentary level. They have been proven to have some form of memory and a rudimentary ability to learn, and the discriminatory ability to “choose” among alternatives regarding, among other things, gene expression. They clearly integrate these capabilities into a self-organized and sensate being that, in at least an extremely attenuated way, is perceiving, discriminating, remembering, and even “thinking”; on some level it is conscious.
Primitive invertebrates like the annelid worm are observed to show maze learning, classical conditioning, and habituation. A wide range of creatures without brains show purposeful behavior indicating that they are sensate beings that not only “feel” things in their environment, but also “intelligently” respond to sensory inputs. This progression continues by degrees, culminating in man.
There is an abundance of empirical evidence from which to conclude that life has always been “self-transcending”, constantly coming together and aggregating into higher forms of self-organizing wholes which become individual "selves", always with increasing degrees of sentience, awareness, and consciousness.
In an evolutionary context, over time prokaryotes developed into eukaryotes through symbiotic assimilation to become independent “selves” with an extremely attenuated form of sentience, and then colonies of eukaryotes developed into metazoans through symbiotic assimilation to become independent sentient “selves”. We observe this same tendency at work in social insects building hives and mounds, and we see it in human beings building societies and civilizations.
Over the large time scales required by the study of evolution, we can unmistakably apprehend a tendency for life to assemble into self-organizing wholes that exhibit the coherent behavior and underlying principles of consciousness, which in the earliest stages of the development of life are not associated with possession of a brain. The evolutionary record demonstrates a continual rise in degrees of sentience that culminated in brains and conscious human beings, but temporally speaking, the evidence does not support the presumption of a cause-and-effect relationship between brain and consciousness.