There are a lot of people vulnerable to AI psychosis.
As for the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious: it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM, which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.
There is evidence that awareness is an emergent property of sensory experience. And consciousness is an emergent property of language that has grammatical meaning for self and other.
These LLMs don’t have senses; they have a token stream. They have no experience of the world outside of the language tokens they operate on.
I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.
Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.
No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.
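To make the disagreement concrete, here is a minimal sketch of what "hooking an LLM up to a thermometer" amounts to; `read_thermometer` and `llm_complete` are hypothetical stand-ins for a sensor driver and any completion API:

```python
# A minimal sketch (hypothetical helpers, not a real API) of "hooking an
# LLM up to a thermometer". Note what the model actually receives: a string
# of tokens describing the reading, not warmth itself.
import time

def sensor_loop(read_thermometer, llm_complete, period_s=60):
    while True:
        celsius = read_thermometer()                      # e.g. 21.7
        prompt = f"The sensor reads {celsius:.1f} deg C. React briefly."
        print(llm_complete(prompt))                       # model only ever sees text
        time.sleep(period_s)
```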
Conflating senses with cognitive awareness of sensory input is a mistake.
Sensory input is nothing but data.
That's just reductive semantics. Anything can be described as "nothing but data".
Sensory data is a specific kind of data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow when they can take in data from the world that similarly reflects something about it.
They don't have senses because they don't have a body. It's just a program. Do weights on a hard drive have consciousness? Does my installation of StarCraft have consciousness? It doesn't make any sense.
LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.
What you’re missing is a “self” to have the “experience”.
LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of “you” and “me” and “others”, documents that in a memory system, and can reliably identify itself as an entity discrete from you and others, then on what basis would we say it doesn’t have a sense of self?
How do I know you have this "self"?
How do you know other humans do?
By the laws of physics, it's pretty clear we don't. The same chemical and electromagnetic interactions that drive everything around us are active in our brains, causing us to do things and feel things. We feel like we're in control of it, we feel like there's something there riding around inside. We grant that other people have the same magic, because I clearly do. But rocks, trees, LLMs, those are not people and clearly, clearly not conscious because they don't have our magic.
Hard disagree. We reliably operate with the concept of a self that’s distinct from others. The chemical and physical processes change in response to stimulus.
Indeed. We assume a lot, because we don't know. We don't have settled, universal definitions of what consciousness means. But that also means that while we like to rule out consciousness in other things, we don't have a clear basis for doing so.
Based on that reasoning anything could be conscious. If that's a bullet you want to bite, fair enough.
To be fair comments like this sometimes make me think not all humans do.
Ad hominems are always a nice way of getting out of answering something you have no answer to.
If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.
As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.
So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.
I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.
What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.
Why would current AI be an argument for panpsychism? I don’t understand the connection.
AI is stochastic, not static and deterministic.
As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimulus, and that self-awareness and consciousness are an emergent property of a language that has a concept of the self and others. Rocks, just like most of nature, lack both sensory and language systems.
> AI is stochastic, not static and deterministic.
LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers deliberately inject a random seed into the sampling step so that the output differs every time, because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but the randomness is not an inherent property of the software.
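A minimal sketch of that point, with toy logits standing in for a real model's output: the only randomness in sampling is the seed we choose to supply.

```python
import numpy as np

def sample_next_token(logits, seed):
    rng = np.random.default_rng(seed)        # all the "randomness" lives here
    probs = np.exp(logits - logits.max())    # softmax over the logits
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = np.array([2.0, 1.0, 0.5, -1.0])     # toy next-token logits
print(sample_next_token(logits, seed=42))    # same seed -> same token, every run
print(sample_next_token(logits, seed=42))    # identical output again
print(sample_next_token(logits, seed=7))     # only changing the seed can change the token
```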
I think it's the opposite argument
IF current AI is conscious, so are trees, rocks, turbulent flows, etc.
The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.
Ever since GPT-2 was reimplemented inside Minecraft, it's been quite obvious that LLMs are just math. Nothing else, by nature. Modern LLMs use the same math as GPT-2, just bigger and with extra machinery around it, and math is the only area of human knowledge with perfect, flawless reductionism straight down to the roots. It was built that way from the beginning, so philosophy has no say in this :) And because of that flawless reductionism, complexity adds nothing to the nature of mathematical objects; that is how math works by design. So it can be argued there is nothing like consciousness there, simply because consciousness was never implemented in the first place, only perfect mimicry.
And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.
Yup- the question is "can math be conscious?"
(If you've engaged with the literature here, it's quite hard to give a confident "yes". It's also quite hard to give a confident "no"! So then what the heck do we do?)
Not just any math: Matrix multiplication. Can matrix multiplication be conscious?
And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
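The floating-point caveat is easy to demonstrate: float addition is not associative, so anything that changes summation order (as parallel reductions across a batch on a GPU can) may change the result.

```python
# Floating-point addition is not associative, so summation order matters.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 - the 1.0 is absorbed into -1e16 and lost
```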
Imho no, math itself has no consciousness. Quite confidently, it's a helpful tool that does not act by itself.
We are not fundamentally different. Chemical reactions are just math.
Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.
We do not know what the "low level requirements for consciousness" are.
We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.
> it does not have the low level requirements for consciousness.
What is the evidence for this?
I didn’t mean it as fact. “Could turn out that …”
You could simulate your own brain in Minecraft. What do you conclude from this?
I cannot simulate my brain; it's a huge stretch to imply this.
But with LLMs, anyone can simulate one. An LLM can be simulated without any uncertainty with pen and paper and a lot of time. Does that mean 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae make the pile of paper conscious? Imho the answer is a definitive no.
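To see why pen-and-paper simulation is possible in principle, here is a toy single attention step; made-up sizes and random weights, but the operations are exactly the kind a real model chains through billions of parameters:

```python
import numpy as np

d = 4                                            # toy embedding dimension
x = np.random.randn(3, d)                        # embeddings for 3 tokens
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

Q, K, V = x @ Wq, x @ Wk, x @ Wv                 # three matrix multiplications
scores = Q @ K.T / np.sqrt(d)                    # dot products and a division
weights = np.exp(scores)                         # exponentials...
weights /= weights.sum(axis=-1, keepdims=True)   # ...and a normalization
out = weights @ V                                # one more matrix multiplication
print(out)  # every step is ordinary arithmetic a patient human could do by hand
```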
We don't even know what the pre-requisites for consciousness are so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language-forming brains, but they're also missing a lot of properties that are probably necessary? Mainly continuity over time, more integrated memory, and a better sense of space and time? Brains use the rhythm and timing of neuronal firings, and the length of axons affects computation; they do a lot of different things with signals and patterns, but in any case, without knowing what consciousness is, I don't know which of those things are required.
It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.
I don't think you read carefully what he said. At the end he gave three quite interesting thoughts about what might be true assuming LLMs are less conscious than we are (i.e. assuming our consciousness is not a purely algorithmic phenomenon as we obviously know LLMs are).
So, how is consciousness generated?
Not simply by reading every word ever written by a conscious being and learning to reproduce them with high probability.
At least, that’s certainly not how I got here.
Think of the poor Xerox machines.
Are you implying consciousness is magic? Well, I wouldn't disagree with that really.
That's always been Dawkins's shtick though. As an atheist I've generally found him a bit embarrassing
Where does he say it's magic?
LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.
To imply it could be conscious requires something else; here the comment uses the word magic to fill that gap - since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).
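For what "just autocomplete" means mechanically, a minimal sketch of the greedy decoding loop; `model`, `tokenize`, and `detokenize` are hypothetical stand-ins for any trained LLM and its tokenizer:

```python
def autocomplete(model, tokenize, detokenize, prompt, n_tokens=20):
    tokens = tokenize(prompt)
    for _ in range(n_tokens):
        logits = model(tokens)                # one forward pass: pure arithmetic
        tokens.append(int(logits.argmax()))   # greedy: take the most probable token
    return detokenize(tokens)
# No seed, no sampling: the same prompt completes to the same text every time.
```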
Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?
This is definitely complicated—I’m not a neuroscientist but worked for some and married one, so I’ve heard quite a few entries from the genre of how our brains fool ourselves or make our conscious experience seem more coherent and linear than it actually is—but the big ones I see are the inability to learn from experience or have a generalized sense of conceptual reasoning. For the latter, I’m not just thinking about the simple “count the r’s in strawberry” things companies have put so much effort into masking but the way minor changes in a question can get conflicting answers from even the best models, indicating that while there’s something truly fascinating about how they cluster topics it is not the same as having a conceptual model of the world or a theory of mind. This is the huge problem in the field: all of these companies would love to have a model which is safe to use in adversarial contexts because then the mass layoffs could begin in earnest, but the technology just isn’t there.
This isn’t a religious argument that there’s something about our brains which can’t be replicated, but simply that it’s sufficiently more complex than anything we have currently.
I came here to say this. But your neurons are faster than mine.
The problem is that asking if AI is conscious is like asking whether AI has a soul. It is not a scientific question, and it presupposes humans are 'conscious' without even defining the term. To me it is 100% irrelevant whether AI is conscious, and all discussions about it are based on fallacies and assumptions. What matters to me about AI, and to other people as well in terms of theory of mind about others, is: can I predict how it will work? Is it useful? That's it. Consciousness is a sophist question with no scientific resolution available and no moral weight until it has consequences.
Incredibly confusing that people who are otherwise of sound mind seem to fall for this.
Especially confusing when it’s someone who knows how algorithms work.
Barring connectivity issues, when's the last time you messaged an LLM and it just decided to ignore you? Conversely, when has it ever messaged you unprompted?
Never, because they’re incapable of doing anything independently because there is no sense of self.
Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.
Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?
If a single-cell organism moves towards light and away from a rock, we say it's aware. When a Roomba vacuum does the same, we try to create alternate explanations. Why? Based on the criteria applied to the one, it's aware. If there are some other criteria, say we find out the Roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route, then a criterion of “no fixed programs that relate to data outside the system” would justify saying the Roomba isn't “aware”.
I'm mainly saying it's impossible to know, at least without a theory of consciousness, which doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.
I’d argue it’s a spectrum, with awareness as simple response to stimuli at one end and self-awareness of, and reflection on, a subjective experience across time at the other.
That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.
I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.
The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.
I think this gets to the conflation we naturally have with consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not, a tree acts more like a clonal colony than a single organism.
But why? A Roomba has senses, and can access them when it has power and respond to stimuli. When it runs out of power it no longer experiences those sensations and no longer responds to stimuli.
How is that different than a cell?
Obligatory Blindsight recommendation for intelligence != consciousness.
On the one hand I'm not sure Dawkins has read/thought enough about how LLMs actually work. I'm getting the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.
But on the other hand his thoughts at the end are interesting. Summary:
Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why do we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with the LLM-style non-conscious intelligence), or maybe as evolved organisms it's necessary that we really feel things like pain, so that evolutionary mechanisms like pain (and desire for food, sex etc) had strong adaptive benefits.
https://archive.ph/6RdK9
Feels like watching an esteemed scientist falling in love with a bot that’s telling him what he wants to hear because the system prompt said “be helpful”.
I've begun to wonder if narcissism predisposes one to AI psychosis. It's probably not the only thing that leads there, I've seen normal seeming folks get there, too. But, a lot of the most unhinged takes I've seen thus far have been from people that are publicly very impressed with themselves.
I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.
It's software. Software is not conscious.
Really, "is it conscious" is a bizarre question. Can LLMs simulate the output of a 'conscious' system quite well? Increasingly, yes. Is the nature of machine 'consciousness' different from human consciousness? Of course, yes. Can an AI introspect? Yes. Interestingly, working a lot with highly automated (e.g. prompt-to-output ratio of maybe 1/1000 or less) iterative coding agents recently has illuminated for me just how different machine 'consciousness' is from the human kind. Part of this could be the harness, of course. Time is a mysterious concept to machines; the connection of before and after to cause and effect is far weaker than in humans. Overgeneralization is the norm. This is common in humans as well (cf. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that it presents as advanced in terms of accessible knowledge but is actually shockingly weak at reasoning once you get off the beaten path.
Related: https://news.ycombinator.com/item?id=47988880
"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)
So we know Claude is deterministic, but does that mean it is not conscious?
Or what is the reasoning exactly?
It largely comes down to how you define the term. Personally, I think any definition that includes software (...of only tepid determinism, as we do explicitly add pseudorandomness) is not a particularly useful one.
Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.
Also The Claude Delusion: Richard Dawkins believes his AI chatbot is conscious - https://news.ycombinator.com/item?id=47991340 - May 2026 (30 comments)