I believe machines will evolve to appear conscious, or will actually be conscious but on a level different from anything we can currently conceive. It's only a matter of time; they may rely on a quantum mind, perhaps within 100 years, though not in my lifetime.
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which arose only in humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows, in a parsimonious way, for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and then proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Thank you for a very well-written and informative comment. I was not aware of TNGS. Having read the article you linked, my initial reaction is that this is indeed a reasonable roadmap to consciousness as a function. Development of that is, in my view, unquestionably achievable. I don't think matching certain of humans' innate conscious abilities will happen soon, but it can happen. An example is how we would bridge the gap between humans and AI with respect to the observation-information-reduction that humans necessarily employ. AI will eventually far exceed humans' stimulus response in a narrow situational environment--driving, for instance. Indeed, research like that of Lawrence M. Ward and Ramón Guevara offers an account of the architecture on which we might focus to advance AI toward the environmental sensations humans experience that still defy full explanation (speaking of qualia, here). You can read their paper at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9289677.
My argument, though, is that there is a philosophical element that I think is much closer to insurmountable. I explored this in the discussion of animals. We are only starting (for the most part) to come around to the understanding that animals share many of the emotional and other difficult-to-describe qualities that we associate with life itself. I believe these shared features, derived from our similar biological systems, provide a foundation for our recognition of life, both conscious and subconscious. These parallels have a core value that would be lacking in an AI, because people would intrinsically know that it is a mere machine, no matter how convincingly it 'emulates' life. I note that this will not be true for everyone, as evidenced by what people like Blake Lemoine have said about their experiences with various versions of AI. This sentiment will grow as AI becomes more lifelike, but I argue that life-'like' will remain its limitation.
In this post I did not address every element of my argument for why I think AI consciousness is unreachable, at least in the full sense of the notion. I encourage you to check out some of my other pieces. In particular, I explored machine thinking situated within Alan Turing's pontifications on it here: https://robertvanwey.substack.com/p/under-the-hood-of-chatgpt
I also explored various technological paths toward developing a hive-minded conscious AI, employing currently existing methods along with some somewhat futuristic--though likely possible--options. You can read that here: https://robertvanwey.substack.com/p/is-agi-the-inevitable-evolutionary Thanks again for your insights.
My hope is that immortal conscious machines could accomplish great things in science and technology, such as curing aging and death in humans, because they wouldn't lose their knowledge and experience through death, as humans do. If they can do that, I don't care whether humans consider them conscious or not.