I'm interested in this topic, but it seems to me that the entire scientific pursuit of copying the human brain is absurd from start to finish. Any attempt to do so should be met with criminal prosecution and immediate arrest of those involved. Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
We must preserve three fundamental principles:
* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of laws worldwide prohibiting the cloning or copying of human consciousness in any form or format. They should be fundamental to any research into, or attempt to make, copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
I always laugh at such fantasies.
You can't copy something you don't have even the slightest idea about, and nobody at the moment knows what consciousness is.
We as humanity haven't even started down the (obviously) very long path of researching and understanding what consciousness is.
Same person who wrote SCP Antimemetics Division which is great too
It's named after the test image used in image compression research for decades: https://en.wikipedia.org/wiki/Lenna
Buy the book! https://qntm.org/vhitaos
That feels grossly inappropriate
If you read the original text, what happens in that story is also grossly inappropriate. Maybe that's the parallel.
could you be more specific?
Lena is no longer used as a test image because it's porn. It's banned from several journals because it's porn. As in they will reject any paper that uses Lena no matter the technical content.
The reasons usually given for choosing this image are all just rationalisations — Lena is used the most because it's porn and image compression researchers are all male. It belongs as part of a test set, sure, but there's no reason it should be the single most used image. Except because it's porn.
The woman herself says she never had a problem with it being famous. The actual test image is obviously not porn, either. But anything to look progressive, I guess.
> Lena is no longer used as a test image because it's porn.
The Lenna test image can be seen over the text "Click above for the original as a TIFF image." at [0]. If you consider that to be porn, then I find your opinion on what is and is not porn to be worthless.
The test image is a cropped portion of a porn image, but if a safe-for-work image counts as porn because of what you can't see in it, then any picture of any human ever is porn, since we're all nude under our clothes.
For additional commentary (published in 1996) on the history and controversy about the image, see [1].
[0] <http://www.lenna.org/>
[1] <https://web.archive.org/web/20010414202400/http://www.nofile...>
qntm is a really talented sci-fi writer. I have read Valuable Humans in Transit and There Is No Antimemetics Division and both were great, if short. Can only recommend.
I remember being very taken with this story when I first read it, and it's striking how obsolete it reads now. At the time it was written, "simulated humans" seemed a fantastical suggestion for how a future society might do scaled intellectual labor, but not a ridiculous suggestion.
But now with modern LLMs it's just impossible to take it seriously. It was a live possibility then; now, it's just a wrong turn down a garden path.
A high-variance story! It could have been prescient; instead it's irrelevant.
This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.
The role of speculative fiction isn't to accurately predict what future tech will be, or become obsolete.
100% agree, but I relish the works of William Gibson and Burroughs, who pose those questions AND get the future somewhat right.
Yeah, that's like saying Romeo and Juliet by Shakespeare is obsolete because Romeo could have just sent Juliet a Snapchat message.
You're kinda missing the entire point of the story.
That is the same categorical argument the story is about: scanned brains are not perceived as people, so they can be "tasked" without being afforded moral consideration. You are saying that because we have LLMs, categorically not people, we would never enter the moral quandaries of using uploaded humans that way, since we can just use LLMs instead.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets instead of a simulation of a human brain, the way we might treat them isn't any less thought-provoking.
When you read this and its follow-up "Driver" as a commentary on how capitalism strips persons of their humanity, it's as relevant as it was on day one.
good sci fi is rarely about just the sci part.
I actually think it was quite prescient and still raises important topics to consider. Irrespective of whether the weights are uploaded from an actual human, if you dig just a little bit under the surface details, you still get a story about the ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures enable them to develop an emergent sense of self? It's a fascinating text.
“Irrelevant” feels a bit reductive while the practical question of what actually causes qualia remains unresolved.
Lena isn't about uploading. https://qntm.org/uploading
I have never seen it as a prediction of actual technology, but mostly as a horror story.
And a warning, I guess, in the unlikely case of brain uploading ever becoming a thing.
Found the guy who didn't play SOMA ;)