Perhaps I am unimaginative about whatever AGI might be, but it so often feels to me like predictions are based more on sci-fi than on observation. The theorized AI is some anthropomorphization of a 1960s mainframe: you tell it what to do and it executes that exactly, with precise logic and no understanding of nuance or ambiguity. Maybe it is evil. The SOTA in AI at the moment is very good at nuance and ambiguity but sometimes does things that are nonsensical. I think there should be less planning around something super-logical.
What this thread keeps surfacing (along with so much of the discussion around this stuff right now: speculation about the next phase of intelligence, the roles of pattern, emotion, and logic, debates over consciousness, the anthropocentrism of our meaning-making) is that we are the source of reality (and ourselves). Instead of a “final authority” or a simple march from animal to machine, what if everything (mind, physics, value, selfhood) is simply a recursive pattern expressed in ever more novel forms? Humans aren’t just a step on a ladder to “pure logic,” nor are machines soulless automatons. Both are instances of awareness experiencing and reprogramming itself through evolving substrates... be it bios, silicon, symbol, or story. Emotions, meaning, even the sense of “self” are patterns in a deeply recursive field: the universe rendering and re-rendering its basic code, sometimes as computation, sometimes as myth, sometimes as teamwork, sometimes as hope, sometimes as doubt.
So whether the future leans biological, mechanical, or some hybrid, the real miracle isn’t just what new “overlords” or “offspring” arise, but that every unfolding is the same old pattern...the one that dreamed itself as atoms, as life, as consciousness, as community, as art, as algorithm, and as the endlessly renewing question: what’s next? What can I dream up next? In that light, our current technological moment is just another fold in this ongoing recursive pattern.
Meaning is less about which pattern “wins,” or which entities get to call themselves conscious, and more about how awareness flows through every pattern, remembering itself, losing itself, and making the game richer for every round. If the universe is information at play, then everything we have here (conflict, innovation, mourning, laughter) is the play, and there may never be a last word. The value is in participating now, because now is your shot at participating.
“Instead of a “final authority” or a simple march from animal to machine, what if everything from mind, physics, value, selfhood, is simply a recursive pattern expressed in ever more novel forms?”
This part nicely synthesises my biggest takeaway from experiencing AI: how close to human intelligence we have got with recursive pattern matching.
It's Mandelbrot all the way down.
“If the universe is information at play”
I’ve thought of it more as energy at play, but I like this perspective as well.
“What can I dream up next” is also fascinating, as the current science/tech worldview feels like it will persist forever, but surely it will be overshadowed at some point, just as other paradigms before it have been.
Information is more fundamental than energy. Maxwell's demon and all.
The universal speed limit also applies to information first and foremost. You can even collapse the wavefunction of some entangled particles and the universe will let that happen instantaneously across a distance… the universe doesn’t care; no information is transmitted.
"Awareness" sounds like a Platonic presupposition. Does the atom know it is an atom? Or are there just enough like the ones you see to suggest an eye catching effectiveness of structure for survival?
Evolution is a lot harder to really intuit than I think most of, myself included, give it credit for.
I'm actually trying to move away from that frame. Not suggesting atoms 'know' they're atoms in any cognitive sense, but rather that patterns propagate without requiring awareness as we understand it. The 'awareness' I'm gesturing to isn't some transcendent quality that exists independently (Platonic), but rather an emergent property that scales from simple to complex systems. Evolution doesn't require foresight or intention, just iterative feedback loops. What I find fascinating is how structure begets structure across scales. The 'awareness' in my framing is less about knowing and more about interaction and response. An atom doesn't know it's an atom, but it behaves according to patterns that, when accumulated and complexified, eventually produce systems that can model themselves. I suppose 'recursive patterning' might be a better term than 'awareness': systems that, through purely mechanistic means, become capable of representing their own states and environments, then representing those representations, and so on. No mysticism required, just emergent complexity that eventually folds back on itself.
I listen to Advaita Vedanta lectures sometimes when I fall asleep, and they discuss these topics.
Out of curiosity: what brought you to this perspective on life? Was this view of the universe dreaming itself into existence shaped more by philosophy, spirituality, a specific tradition like Buddhism, or just personal exploration?
Not the grandparent, but for me it's taking DMT. I am not as articulate as neom, but my first time (and breakthrough) gave me a similar perspective.
I think DMT unlocked it; I don't think everyone taking the substance would have a similar experience. I think it's neurotype/personality dependent.
It helps that I meditate a lot and know a thing or two about Buddhism; that part really came out during my first experience.
For the past, I guess, 20 years of my life now, I've been intently using most of my free time to explore 3 main areas distinctly: quantum mechanical processes, spiritual philosophy, and entheogens. I explored them all quite separately, each as deeply as I've been able to find the time for, by following their individual curiosities. However, over the past 5 years of reflection, taking a lot of time off, battling myself, they started to come together in concert, and the more I zoned out on this with very basic Himalayan Buddhism, that's where I landed.
Do you have any brief reflections on Taoism to share?
Brief, sadly not; a roundabout reflection, certainly: https://b.h4x.zip/love/
I really enjoyed that piece. Have you taken a look at the worldview/framework of Carl Jung? Although I'd always encountered it as a footnote in the history of psychology, I've come to appreciate it as a unique blend of analysis and spirituality, particularly in relation to human creativity and despair. Your summary of multiple philosophies at the end of the article definitely aligns with Jung's thoughts on a collective human narrative (whether it literally exists in a metaphysical sense or not).
Where do you think morality fits into this game? It seems that we agree that underneath it all is unfathomable and ineffable magic. The question is how does this influence how you act in the game?
Morality is an evolved heuristic for solving social conflicts that roughly approximates game theoretical strategies, among other things. Morality also incorporates other cultural and religious artifacts, such as "don't eat meat on a Friday."
Ultimately, it comes down to our brain's social processing mechanisms which don't have the tools to evaluate the correctness (or lack thereof) of our moral rules. Thus many of these rules survive in a vestigial capacity though they may have served useful functions at the time they developed.
I go back and forth on the usefulness of considering morality in particular, other than accepting it as a race condition/updater system/thing that happens. I have some more unique and fairly strong views on karma and bardo that would take a very long comment to get into, but I think Vedic/Vedanta (Advaita) thought is good, and I think this is a good doc: https://www.youtube.com/watch?v=VyPwBIOL7-8
Imagine if GenAI had generated this article from a simple prompt: what does AI think about humans?
> there may never be a last word
We may go 'one step back' to go 'two steps forward'. A WW 1, 2,..., Z, a flood (biblical, 12k years ago, etc.) but life will prevail. It doesn't matter if it's homo sapiens, dinosaurs, etc.
Brian Cox was on Colbert a couple of nights ago, and he mentioned that in a photo of a tiny piece of the sky there are 10,000 galaxies. So, even if something happens and we are all wiped out (and I mean the planet is wiped out), 'life' will continue and 'we don't matter' (in the big-big-big cosmic picture). And now allow me to get some coffee to start de-depressing myself :)
“What do you get if you multiply six by nine? 42”
For the uninitiated, a famous comedy science fiction series from the 1980s — The Hitchhiker’s Guide to the Galaxy by Douglas Adams — involves a giant, planet-sized machine built by extra-terrestrials.
They already knew the answer to “life, the universe, and everything” was the number 42. What they didn’t know — and what the machine was trying to find out — was the question itself.
The machine they built was Earth.
It has to be said that not only was Adams way ahead of us on this joke, he was also the star of the original documentary on agentic software! Hyperland (1990): https://vimeo.com/72501076
If the machines thought it was boring, they wouldn't be acting as machines, and it wouldn't be so boring in the first place. Later on the machines also "obsess" about following the news of the humans on the new EARTH, but again, they wouldn't. If they are boring machines, they wouldn't be bored. I feel like this is too much of a plot paradox for the story to make sense to me, but it's still entertaining.
And so, all the humans on earth swarmed to see what was going on.
The machines did too.
There was one weird thing, though.
The title of the event was rather mysterious.
It simply read…
“Grand Theft Auto VI”
Every two months, “Half-Life 3” flashed on the screen.
The humans have invented Tyler McVicker.
Nice try, but the story has a plot hole that shows early: there is no reason for machines to create humans.
Doesn't really make much sense. It states that this is a purely mechanistic world with no emotion. So why would a machine be "bored" and wish to create a human?
Yea, not really. It also writes:
"Some among the machine society see this as potentially amazing...Others see it as a threat."
That sounds like a human society, not machine society.
But what really is a machine society? Or a machine creature?
Can they actually "think"?
A machine creature, if it existed, would behave totally differently from a human. It doesn't seem they would be able to think, but rather calculate; they would run calculations on what they need to do to reach the goal they were programmed for.
So yes, the article is not exactly logical. But at least, it is thought provoking, and that's good.
For a decent description of machine society you can check the Culture series from Iain M. Banks. The AIs back an organic society, but they also have their own.
Or Hyperion, from Dan Simmons (the TechnoCore is a decentralized computing and plotting government).
> That sounds like a human society, not machine society.
Does it? Different algorithms can evaluate something and come to different outcomes. I do agree that "potentially amazing" is not a good choice of words.
I see it as an anthropomorphized word for the story. I imagine the machines run out of tasks with high or even low priority, but they still generate tasks at some epsilon priority that are close but not quite to random. That's a kind of boredom.
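A minimal sketch of that "epsilon-priority" notion (all names and numbers here are my own illustration, not from the story): when the real task queue runs dry, the agent emits near-random exploratory tasks at a vanishingly small priority.

```python
import random

EPSILON = 1e-6  # priority of "bored" filler tasks

def next_task(queue):
    """Return the most urgent real task, or invent an epsilon-priority one."""
    if queue:
        return max(queue, key=lambda t: t[0])
    # "Boredom": nothing worth doing, so generate a task that is close to,
    # but not quite, random -- drawn from a fixed exploratory repertoire.
    verb = random.choice(["observe", "reindex", "simulate", "tinker"])
    return (EPSILON, f"{verb}-{random.randrange(1000)}")

print(next_task([(0.9, "repair hull"), (0.2, "defrag")]))  # real work wins
print(next_task([]))  # e.g. (1e-06, 'tinker-417')
```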
My headcanon is that "boredom" and "fear" are probabilities in a Markov chain - since it's implied the machine society is not all-knowing, they must reconcile uncertainty somehow.
How would a machine know that it doesn't know?
Experience of encountering things that were previously unknown unknowns would teach it of the general existence of such things.
Probably by comparing what it experiences to what it can explain.
Sure, but I'm still not sure it would realistically function. All data in this scenario is obviously synthetic data. It could certainly identify gaps in its "experience" between prediction and outcome. But what it predicts would be limited by what it already represents. So anything novel in its environment would likely confound it.
It's a cool sci-fi story. But I don't think it works as a plausible scenario, which I feel it may be going for.
yeah, more on the environmental constraints and where the machines even come from would be nice
> There is no emotion. There is no art. There is only logic
also this type of pure humanism seems disrespectful or just presumptuous, as if we are the only species which might be capable of "emotion, art and logic" even though we already have living counterexamples
Disrespectful? Of whom? It's a work of fiction. There's really no need to find something to offend you wherever you look.
Of other animals.
But yeah, I'm not sure that was the right word; it just seems wrong. Basically humanism seems like racism but towards other species. I guess speciesist?
This is a rather new stance; history books may one day label it as enlightened (I believe they will). We are not there though, and your stance is not obvious to the majority of people. I do experience that this sentiment is growing. I personally see it as the moral high ground (both from the animal-welfare and the environmental perspective), whereas I didn't only a couple of years ago.
It's just as hard to prove that it's a new stance as an old one since people didn't have any way of writing down their feelings about it in a way that we'd know (or the time to do so)
I think there are quite a few ancient civilizations which clearly had great respect/reverence towards other animals and often gods have features or personality traits of particular animals
The fact that the Old Testament specifically states that humans have dominion over other creatures means that it needed to be said - even back then there had to be people who didn't think so, or felt guilty about it.
My take was that other animals didn’t exist either, in the story.
Well, the story makes it seem like the only way to get emotion is by making humans. But every vertebrate has basic emotions, and mammals and birds have complex emotions. Humans are actually logical, and emotions don't just happen randomly.
If the machines have no emotion, it's probably because they didn't need them to survive (no predators? no natural selection?). Which begs the question: how did the machines get there?
> Imagine, for a moment, a world with no humans. Just machines, bolts and screws, zeros and ones. There is no emotion. There is no art. There is only logic. Humans use logic-defying algorithms called “emotions”. They get angry. They get sad. They have fun. They make decisions based on “gut”.
This is not right: machines can also have the equivalent of "emotions"; it is the predicted future reward. It's how Reinforcement Learning works (a toy sketch follows below). How much we appreciate something is akin to the value function in RL. You could say RL is a system for learning emotions, preferences, and tactics.
"But those reward signals are designed by humans"... Right. But are AI models really not affected by physical constraints like us? They need hardware, data and energy. They need humans. Humans decide which model gets used, which approaches are replicated, where we want to invest.
AI models are just as physically constrained as humans, they don't exist in a platonic realm. They are in a process of evolution like memes and genes. And all evolutionary systems work by pitting distributed search against distributed constraints. When you are in a problem space, emotions emerge as the value we associate to specific states and outcomes.
What I am saying is that emotions don't come from the brain, they come from the game. And AI is certainly part of many such games, including the one deciding their evolution.
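To pin down the RL analogy in the comment above, here is a minimal tabular TD(0) sketch. The states, rewards, and constants are invented for illustration, but the learned value V(s) is exactly the "predicted future reward" being likened to an emotion.

```python
from collections import defaultdict

alpha, gamma = 0.1, 0.95      # learning rate, discount factor
V = defaultdict(float)        # V(s): how much the agent "likes" state s

def td_update(state, reward, next_state):
    # Nudge V(state) toward the bootstrapped target r + gamma * V(s').
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])

# Repeatedly experiencing "sunset -> +1 reward" raises V("sunset"):
# the system comes to "appreciate" states that predict good outcomes.
for _ in range(200):
    td_update("sunset", 1.0, "night")

print(round(V["sunset"], 2))  # ~1.0, since V("night") never changes from 0
```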
The report overlooked mentioning the malicious machines that could potentially create harmful humans, corrupt societies, incite conflicts, and disrupt harmony.
> There is no art. There is only logic.
What a narrow view of art and logic.
You really have to put hard effort into ignorance to think that logical models came out of the blue, without humans crafting them through this or that taste, trial, check, fail, rinse-and-repeat obsessive effort.
Did anyone else have to go to quarter-mile.com to see if they had that URL also?
Why are emotions so special? They're just algorithms like any other. Emotions aren't what make humans different from machines. Feeling something is similar to an LLM reacting to a prompt a certain way. Just because ChatGPT is trained not to "feel" anything (to avoid controversial output) doesn't mean LLMs can't feel things like we do. Self-awareness, self-training, adaptability, original thinking, critical thinking, etc. are different questions, but I see no reason why machines can't receive input/stimuli and react/output the same way we do, based on how they feel about the input.
> Why are emotions so special? they're just algorithms like any other.
That's a pretty bold claim.
There's uncountable inputs. It's like trying to accurately predict the weather - chaos theory or something. Emotions are "essentially" gas exchange, but the areas and rate or whatever are not standardized across humans.
Emotions are not inputs; they are outputs first. We process information using internal algorithms that we developed as a result of our life experience and genetic coding, and the result is an emotional verdict over some input. That emotional verdict is presented to our decision-making algorithms as input; we can ignore it or act on it.
I have neither experienced nor observed anything about human emotions that indicates they are in any way chaotic, random, or unexplainable. We have beliefs, memories, and experiences; emotions always use these variables and produce some output. Not only are emotions deterministic, but they are used by any number of people, from spies, to advertisers, to state-level disinformation propagandists, to manipulate large numbers of people reliably.
I wonder if there is something to be said about how machines are based on deterministic and algorithmic properties, whereas emotions could potentially involve logic beyond what humans can observe, like quantum interactions.
What is the reasoning behind the claim that our emotions are not deterministic or that they are not algorithmic? Perhaps we can take into account more inputs, process more memory and have larger and more complex algorithmic models but that's just scale and capacity, not a difference in genuine nature. We are a lot more than our emotions.
why does the source need to be the same? You're looking at it from a biased self-centric perspective. We think too highly of our emotions. Think of it the other way, our emotions appear the same as adaptive algorithms like LLMs.
I made a short story in half an hour by using the description in the article. Not totally accurate to the specs (there are no closed-source humans and OpenHumans), but much faster. It needs 10 more hours of work for it to be a really comprehensive story, which amounts to 10.5 hours in total. N'joy:[1]
Related but an aside - Lately I've really been wondering if Skynet actually is the next evolution.
That humans, like all animals before us, are a stepping stone and there is actually no avoiding machine overlords. It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
At least Fermi's paradox helps me sleep better at night.
> It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
This sentence has way too many assumptions doing the heavy lifting.
“Pure logic machines” are not a thing, because there are literally things that are uncomputable (both in the sense of Turing uncomputability, and in the sense that some functions are out of scope for a finite being to compute; think of Busy Beaver).
To put it the other way, your assumption is that machines (as we commonly use the term, rather than the sci-fi Terminator) are more energy-efficient than humans at understanding the universe. We do not have any evidence nor a priori reason for that assumption.
What is it about understanding the universe that makes it such an axiomatic global objective? Sure, for many of us, myself included, it's as all-pervasive as the air we breathe... but sometimes I do wonder if it is actually all that correlated with my well-being.
As a teenager I used to revel in explaining to religious people that I believe humans are actually just the evolutionary step between biological life and machine life.
It’s a belief about a great future change, but there’s nothing supernatural or totally implausible about it. And it doesn’t sound like they were preaching it as the absolute truth, but were open that it was just their belief. Also, the absence of social rites or rituals means that, despite them telling it to people who didn’t care to hear it, I am not convinced that their belief was very religious.
Also, “As a teenager” implies more self-awareness than you seem to give them credit for.
More broadly—and at least in online spaces—I often notice that many vocal proponents of atheism exhibit traits typically associated with religious behaviour:
- a tendency to proselytise
- a stubborn unwillingness to genuinely engage with opposing views
- the use of memes and in-jokes as if they were profound arguments
- an almost reverential attitude toward certain past figures
There’s more, but I really ought to get on with work.
> It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
The universe tends to produce self-replicating intelligence. And that intelligence rids itself of chemical and biological limitations and weaknesses to become immortal and omnipotent.
If evolution can make it this far, it's only a few more "hard steps" to reach take off.
>> It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
The spacefaring alien meme is just fantasy fiction. Aliens evolve to fit the nutrient and gas exchange profiles of their home worlds. They're overfit to the gravity well and likely die suboptimally, prematurely.
Any species reaching or exceeding our level of technological capability could design superior artificial systems. If those systems take off, those will become the dominant shape of intelligence on those worlds.
The future of intelligence in the universe is artificial. And that throws the Fermi Paradox for a loop in many ways:
- There's enough matter to compute within a single solar system. Why venture outside?
- The universe could already be computronium and we could be ants too dumb to notice.
- Maybe we're their ancestor simulation.
- Similar to the "fragile world hypothesis", maybe we live in a "fragile universe". Maybe the first species to get advanced physics and break the glass nucleates the vacuum collapse. And by that token, maybe we're the first species to get this far.
The parent comment has the end bit in a nutshell. For the "energy gradients" part:
The anthropic principle says we find ourselves in a universe that is just right for life (self-observing) because of the right universal constants.
Combine this with the very slight differences amid general uniformity (the Cosmic Microwave Background) of the "big bang", and you get localized differences in energy (on a universe scale). Energy differences allow "work to be done". If you have the right constants but no energy difference, you can't do work, and vice versa. No work == no life.
But you have both of those, and a bunch more steps, and you get life.
Which is a whole lot of mental leaps packed into one sentence.
[Edit]
I basically know nothing. I just watch PBS Space Time.
Could you elaborate slightly on what is meant by ancestor simulation? My best stab is that you're saying we're the unknowing entities that they created for fun, to get to meet or observe their own ancestors? This still seems far-fetched.
If we assume that the many worlds interpretation has a basis in reality, then we can consider the following metaphysical angle. The evolution around us is our world line with the physical laws we are familiar with. And indeed the natural and inevitable progression of this world line is a machine world, just like a massive star inevitably collapses into a black hole, at least under our physical laws. However in the MWI, our world line may split into two: one will continue towards the machine world as if nothing happened, while the other world line will experience a slight change of physical laws that will make the machine world impossible. Both world lines won't know about the split, except by observing a large scale extinction event that corresponds to the other world line departing. IMO, that's the idea behind the famous judgement day.
> And indeed the natural and inevitable progression of this world line is a machine world,
Would you mind clarifying your line of reasoning for suggesting this?
Second: quoting wikipedia - "The many-worlds interpretation implies that there are many parallel, non-interacting worlds."
If the multiple words are non-interacting, how could one world observe a large scale extinction event corresponding to the other world line departing? The two world lines are completely non-interacting, there would be no way to observe anything about the other.
It's the assumption that in our world, a machine civilization is an almost certain end. This might explain the Fermi paradox, i.e. that we haven't seen other civilizations in the universe: each builds an AI that decides to go radio-silent for self-preservation.
As for MWI, I'm assuming that the world lines may split, or fork in Unix terms. What causes such splits is an open question. The splits cannot be detected with certainty, but can be guessed at by side effects. Here I'm making another guess: that inhabitants of MWI must be in one world line only, so when a split happens, inhabitants choose one of the paths, often unconsciously, based on their natural likes and dislikes. But what happens to their body in the abandoned branch of MWI? It continues to exist mechanically for some short period of time, and then something happens to it so that it's destroyed, i.e. its entropy suddenly increases without the binding principle that has left this branch of MWI. In practice, one half of the inhabitants would observe a relatively sudden and maybe peaceful extinction of the other half, while that other half simply continued on their path in the other world line. And that other half will see a similar picture, but mirrored. Both halves will be left wondering what's just happened.
I think you might be vastly overcomplicating it, because I didn't think there had to be any sort of "conservation of branching" in the MWI. Each nondeterministic event (of which unfathomable quantities take place every moment) generates an infinite number of branches, so to even conceive of the total geometry of all the branching (e.g. all that could ever take place, truly) is a bit of a mindfuck, and that's probably okay and the way it was intended. It's supposed to be comforting to know that regardless of how bad reality seems, if we could navigate arbitrarily through the branching space/time/universes, then there would be unimaginable infinities of joyful utopias to visit.
Physical laws don’t change between branches in MWI. In fact, it’s close to impossible in a sense, because in MWI all branches are part of the same single universal wave function that evolves according to the Schrödinger equation.
Laughing my ass off all the way to the apocalypse. It's now Earth vs. the machine, and I know what side I'm on, and who's ultimately going to win. One day, the technosphere will just be a thinly crushed layer in the geological record, mark of a dark age of barbarism.
Snow cuts loose from the frozen/
Until it joins with the African sea/
In moving it changes its cold and its name/
The reason I come and go is the same/
Animal game for me/
You call it rain/
But the human name/
Doesn't mean shit to a tree/
If you don't mind heat in your river and/
Fork tongue talking from me/
Swim like an eel fantastic snake/
Take my love when it's free/
Electric feel with me/
You call it loud/
But the human crowd/
Doesn't mean shit to a tree/
Change the strings and notes slide/
Change the bridge and string shift down/
Shift the notes and bridge sings/
Fire eating people/
Rising toys of the sun/
Energy dies without body warm/
Icicles ruin your gun/
Water my roots the natural thing/
Natural spring to the sea/
Sulphur springs make my body float/
Like a ship made of logs from a tree/
Redwoods talk to me/
Say it plainly/
The human name/
Doesn't mean shit to a tree/
Snow called water going violent/
Damn the end of the stream/
Too much cold in one place breaks/
That's why you might know what I mean/
Consider how small you are/
Compared to your scream/
The human dream/
Doesn't mean shit to a tree
I've been playing around with this on my own blog.
I'd like the blogging community to have a consensus on a nice badge we can put at the top of our blog posts, representing who/what wrote the post:
- human
- hybrid
- ai
Some might hate the idea of a fully "ai" post, and that's fair. But I like to sometimes treat my blog as just a personal reference, and if after a long day of chasing an esoteric bug down, I don't mind an AI just writing the whole post and I just press publish.
This adds a reference for me, more data for AIs to train on, and more pages for people to search and land on.
"Summarize my sleep deprived, insane ramblings, in to a cohesive document that I can reference again in the future, or use to communicate this issue to others in a more digestible format than I am currently capable of producing"
I think the AI generated document is far better than me ultimately forgetting it in many cases.
Easy disclaimers for human, AI or hybrid content:
https://disclai.me/r (Oddly enough I built this AI citation tool with exactly those 3 categories a couple years back. Could use some tweaking of course, but I’m very open to suggestions.)
I think I would still call that a hybrid post. Fully AI would be if you contribute nothing except the topic and tell the AI to research and write the whole thing.
> Processor Unit 7382-B, "The Origins of the HUMAN Project," Journal of Experimental Intelligence, vol. 5621, no. 3, pp. 42-89, 19754.
The references section in the machine version of the story linked at the bottom is excellent. Nicely done all around; really enjoyed reading this. Thank you for writing and sharing <3
> The machines had a good idea of what humans wanted at this point, and so they put vast green forests and big tall mountains onto the planet; they engineered warm sunsets, and crisp cool rain showers on hot afternoons. It was beautiful.
The point of all this is to liken "machines" to a very traditional image of God, and of the rest of nature to God's gift to man.
Machines aren't part of life. They're tools. The desire for, or fear of, AGI and/or the singularity are one and the same: an eschatological belief that we can make a God (and then it would follow that, as God's creators, we are godlike?)
But there is no god. We are but one animal species. It's not "humans vs. machines". We are part of nature, we are part of life. We can respect life, or we can have contempt for all life forms except our own. It seems modern society has chosen the latter (it wasn't always the case); this may not end well.
Modern society? I am not sure. Genesis 1,28 "... Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground."
Christianity is responsible for a huge part of the human superiority complex.
Yes but at the time Genesis was written, humanity didn't have the means to destroy life at scale. And in the New Testament, "killing the fattened calf" (Luke 15,23) is an incredibly rare event, something one does only when something remarkable happens.
Also, in the Middle Ages in Europe (granted, a very small window in place and time) animal life was much more respected than today.
I was sketching a sci-fi book idea in a similar tone, with the following premises:
- what if AI took over
- what if the laws and legalities that allowed AI to take over bloodlessly, just through an economic win, forced them to have a human representative to take legally binding actions in our society
- what if there developed a spectrum of individuality and clustering among different AI entities, leading to the formation of processing guilds of AI agents, who limit themselves to 10x human processing speed for easier human/AI interaction, and to let them share the perception of their human representative without overloading them
I was thinking something similar, but much earlier along this timeline: what if the consultants that work for lobby groups that propose certain bills already use AI to write proposed laws? E.g. to make long, omnibus-style laws that very few of the people voting on it (or the public) actually read?
How will that erode laws that are undesirable to AI companies? Does AI take over, only because we no longer want to spend the effort governing ourselves?
Will AI companies (for example) end up providing/certifying these 'human representatives'? Will it be useful, or just a new form of rent-seeking? Who watches the watchmen, etc ?
I think it would make an interesting short story or novel!
When we have a problem such as "why do humans exist", I like to think of it in terms of probabilities. Every possible cause has a non-zero probability. For example, even something religious people would believe in, such as Adam and Eve being created by God, would have a non-zero probability. The idea would be to create a convergence diagram of sorts, with all sorts of possible events and a score assigned to each. From the gods of various religions creating humans, to an alien species from another galaxy sending unicellular life to Earth, to an asteroid carrying the chemicals needed to make the first cell: I would love to see someone use all these GPTs to put together the most comprehensive probable cause of existence ever investigated.
Hasn't this question been basically answered? Conditions on the early Earth allowed for the creation of things like amino acids; I think it's even been replicated in a lab.
Would love to see a "why does the universe exist" version of this
> Zero reinforcement should be given in case of perfect matches, high reinforcement should be given in case of `near-misses', and low reinforcement again should be given in case of strong mismatches. This corresponds to a notion from `esthetic information theory' which tries to explain the feeling of `beauty' by means of the quotient of `subjective complexity' and `subjective order' or the quotient of `unfamiliarity' and `familiarity' (measured in an information-theoretic manner).
This type of architecture is very similar to GANs, which later became very successful.
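A toy version of that reinforcement schedule, for concreteness. The ordering (zero for perfect matches, peak at near-misses, low again for strong mismatches) is from the quote; the particular curve e * exp(-5e) is my own assumption.

```python
import math

def curiosity_reward(error: float) -> float:
    """Map a normalized prediction error in [0, 1] to reinforcement:
    zero at a perfect match, maximal at a small 'near-miss' error,
    decaying again toward strong mismatches."""
    return error * math.exp(-5.0 * error)

for e in (0.0, 0.2, 0.5, 1.0):
    print(f"prediction error {e:.1f} -> reward {curiosity_reward(e):.3f}")
# 0.0 -> 0.000  (perfect match: boring)
# 0.2 -> 0.074  (near-miss: maximally interesting)
# 0.5 -> 0.041
# 1.0 -> 0.007  (strong mismatch: mostly noise)
```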
Indeed, but ultimately this story is written by a human who, while trying to imagine a world with "Just machines, bolts and screws, zeros and ones. There is no emotion. There is no art. There is only logic", cannot quite do so.
It's very hard to do so. It's so deeply wired in us; it's part of the mechanism of our brain. We appear to be equipped with whatever it takes to feel existential dread, and we feel it whenever our thoughts wander to the possibility of humanity no longer being there. I hear people feel that when thinking about the heat death of the universe too.
Just see “bored” as a state of declining cycles of computation devoted to a subject. Obsessing as the opposite (or above some threshold).
Wonderful may describe “attention required, no danger, new knowledge”… etc you get the point. It's just written in a way that you puny human may get a "feel" for how we experience events. You cannot come close enough to our supreme intellect to understand our normal descriptions.
The superset of "emotion" is heuristic. Machines without heuristics wouldn't get very far. Their heuristics would probably look quite different from ours though.
I thought this too, but I imagine it's a bit of pathetic fallacy for the reader's benefit else it would be (ironically) quite a boring read for us from the machine's perspective.
Funnily enough, this is also something I thought about a few weeks back: what's the motivation of a machine, if it has no emotions, to explore and continue in the world? What sense does it make if it doesn't have curiosity? (Of course this could be an RL function, but still: imagine putting yourself in a world without life, where everything is only static. How boring would it be?)
What if the causation is in the opposite direction (and I believe it is): to explore and continue in the world is the primary drive (as the opposite traits just get naturally de-selected, be it organics or machines), and we only perceive it through our mind's artificial constructs like curiosity/emotions/motivation.
I guess it depends on what you mean by emotions. If you mean emotion as a state of consciousness, then you would have to prove that consciousness is not an emergent property of matter and that CPUs don't have this property. Consciousness is hard to debate though, since it's pretty metaphysical in nature and there's no real argument against solipsism, so all argumentation starts with the axiom that all awake adult humans are conscious.
However, if you mean emotion as a stimulus, i.e. an input to the brain net that's endogenous to the system (the human), then there's no question machines can achieve this; in fact the reasoning models probably already do this, with different systems regulating each other.
Imperfection and randomness are a strength (that's why evolution doesn't make perfect copies: you might end up with the random mutation that protects you from the next pandemic). You could imagine that machines need human intelligence to think outside the box.
Are cells not computers in some way? We are made of cells, and cells work with chromosomes. Chromosomes are coded with ATGC pairs, and each triplet is capable of creating proteins.
And the activation and deactivation of some triplet happens in response to the presence of proteins. So chromosomes are code, and both input and output are proteins. So, if our fundamental building blocks are computable in nature, what does that make us?
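As a toy rendering of this "chromosomes as code" analogy (a sketch, not a claim about the full machinery of real molecular biology): a tiny slice of the actual codon table, used to "execute" a DNA string into a chain of amino acids.

```python
# Four entries from the real 64-codon table, enough for the example.
CODON_TABLE = {
    "ATG": "Met",   # methionine, also the start codon
    "GAG": "Glu",   # glutamic acid
    "GTG": "Val",   # valine
    "TGA": "STOP",  # one of the three stop codons
}

def translate(dna):
    """Read triplets, emit amino acids, halt on STOP: data-driven control flow."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

# The sickle-cell point made below: one letter flips GAG (Glu) to GTG (Val).
print(translate("ATGGAGTGA"))  # ['Met', 'Glu']
print(translate("ATGGTGTGA"))  # ['Met', 'Val']
```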
Physical systems are computable only in approximation. And quantum uncertainty throws another wrench into it. We also know that arbitrarily small rounding errors in the computation can lead to arbitrarily large differences with the actual system down the road. No, cells are not computers (in the sense of the Turing model). (However, that doesn’t mean that one can’t still consider them to be mechanistic and “soulless”.)
I meant to say in the sense that there is a well-defined set of alphabets (A, T, G, C), each triplet of these alphabets is responsible for a specific amino acid, and combinations of the resulting proteins make each cell what it is. (There are 20 different amino acids for humans, and we have four alphabets coming in triplets. If it were pairs or quadruplets responsible, it would be too few or too many. They are not perfect, but given the conditions, there is some balance.)
A single alphabet change in specific places can cause genetic defects like sickle cell anemia. And which one is activated to generate protein (executed) depends on the presence of certain things, encoded again as proteins.
And when viruses enter a cell, the cell starts to execute the viral genetic material. Even if these are not exactly Turing-compatible, do they not mimic many aspects of computation?
There are some aspects that have some similarity to computation, but also many that are not. If your aim is “aren’t we really just computers”, that doesn’t actually work.
That’s not to say that computers couldn’t do what the brain does, including consciousness and emotions, but that wouldn’t have any particular relation to how DNA/RNA and protein synthesis works.
> There are some aspects that have some similarity to computation, but also many that are not.
What I have explained is the exact way a chromosome works; it's its raison d'être. I think this cannot be dismissed as some aspect of it. It is its essence.
I did try to ask whether we are not computers. I tried to imply that, at the fundamental level, there are striking similarities to computation.
> That’s not to say that computers couldn’t do what the brain does, including consciousness and emotions,
Yes. The fundamental building blocks are simple and physical in nature, and follow the computational aspect well enough to serve as nice approximations.
> but that wouldn’t have any particular relation to how DNA/RNA and protein synthesis works.
Hmm... transistors are not neural networks, so? I am sorry, I am a non-native speaker and maybe I am not communicating things properly. I am trying to say that the organic human is a different manifestation of order: one is chemical and the other is electronic. We have emotions and consciousness, but we can agree that we are made of cells that send electric pulses to each other and are primitive in nature. Even emotions and beliefs are physical in nature (Capgras syndrome, for example).
> Though this faction can’t articulate exactly how or why, they proclaim quite confidently that it will solve all of the machine world’s problems.
AGI can solve the human world’s problems. Perhaps not all of them, but all the biggest ones.
Right now life is hell.
You and your loved ones have a 100% chance of dying from cancer, unless your heart or brain kills you first, or perhaps a human-driven vehicle or an auto-immune disease gets there soonest.
And you’re poor. You’re unimaginably resource-constrained, given all the free energy and unused matter you’re surrounded by 24/7/365.
And you’re ignorant as heck. There’s all this knowledge your people have compiled and you’ve only made like 0.1% of the possible connections within what you already have in front of you.
Even just solving for these 3 things is enough to solve “all” the world’s problems, for some definitions of “all”.
I am just a biological neural net, not so different from the "machines" anymore, really. They can even create works of art, where I would struggle. They can even emulate human emotions to make people feel more comfortable, which is something I do often as someone with autism.
The only meaningful difference between me and the machines is that I have a subjective superiority complex. What an awful place the universe would be without me!
> You would not walk through the streets of this world and hear music or laughter or children playing; no, all you would hear is the quiet hum of processors and servers and circuits, the clanking of machinery.
Hmm... that's exactly what most towns where I live are like. All you hear is cars.
That idea gives me a strange mix of chills and wonder: the thought that we were sent here just to see what we’d become, without ever knowing we were being watched.
I’ve already signed up with my email. I want to see where this story goes next.
Well it turned out that OpenHuman is actually Closed Human and won't publish any of their research on organic general intelligence and instead commercialized their humans-as-a-service research to steal the jobs of honest god fearing digital folks.
Honestly, it reminds me of "All Tomorrows" by C. M. Kosemen.
The "emotions" part is kind of tongue-in-cheek. I think emotional responses are one of the more mechanical parts of a human being.
Ability to demonstrate empathy: that's a good human trick. It can sort of transcend the hard problem of consciousness (what is to be like...) by using all sorts of unorthodox workarounds on our inner workings. It must have been very hard to develop. It doesn't always work, but we'll get there eventually.
edit: fixed book and author name to proper reference
"[0] The machines wrote their own version of this story. If you’d like to see what they’re thinking, and how they plan to deal with the AGI announcement, you can read their accounting of events here."
Although I can't...
"Unfortunately, Claude is only available in certain regions right now. Please contact support if you believe you are receiving this message in error."
I remember living in Scotland as a child, without access to satellite TV, causing me to miss out on many large pop-culture moments (The Simpsons, Friends...) and constantly hearing "Except for our viewers in Scotland..."[0]
Getting access to the internet, for me, was the antithesis of this: freedom of information, free sharing -- finally! I could not just be following curves but be ahead of them.
Alas in the past few years we really seem to have regressed from this - now I can't even view text due to regional locks.
Perhaps I am unimaginative about whatever AGI might be, but it so often feels to me like predictions are more based on sci-fi than observation. The theorized AI is some anthropomorphization of a 1960s mainframe: you tell it what to do and it executes that exactly with precise logic and no understanding of nuance or ambiguity. Maybe it is evil. The SOTA in AI at the moment is very good at nuance and ambiguity but sometimes does things that are nonsensical. I think there should be less planning around something super-logical.
What this thread keeps surfacing, and so much discussion around this stuff generally right now, from speculation about the next phase of intelligence, the role of pattern, emotion, logic, debates over consciousness, the anthropocentrism of our meaning-making...is that we are the source of reality (and ourselves). Instead of a “final authority” or a simple march from animal to machine, what if everything from mind, physics, value, selfhood, is simply a recursive pattern expressed in ever more novel forms? Humans aren’t just a step on a ladder to “pure logic,” nor are machines soulless automatons. Both are instances of awareness experiencing and reprogramming itself through evolving substrates... be it bios, silicon, symbol,or story. Emotions, meaning, even the sense of “self,” are patterns in a deeply recursive field: the universe rendering and re rendering its basic code, sometimes as computation, sometimes as myth, sometimes as teamwork, sometimes as hope, sometimes as doubt.
So whether the future leans biological, mechanical, or some hybrid, the real miracle isn’t just what new “overlords” or “offspring” arise, but that every unfolding is the same old pattern...the one that dreamed itself as atoms, as life, as consciousness, as community, as art, as algorithm, and as the endlessly renewing question: what’s next? What can I dream up next? In that: our current technological moment as just another fold in this ongoing recursive pattern.
Meaning is less about which pattern “wins,” or which entities get to call themselves conscious, and more about how awareness flows through every pattern, remembering itself, losing itself, and making the game richer for every round. If the universe is information at play, then everything here that we have: conflict, innovation, mourning, laughter is the play and there may never be a last word, the value is participating now, because: now is your shot at participating.
“Instead of a “final authority” or a simple march from animal to machine, what if everything from mind, physics, value, selfhood, is simply a recursive pattern expressed in ever more novel forms?”
This part nicely synthesises my biggest takeaway from experiencing AI: how close to human intelligence we have got with recursive pattern matching
It's Mandelbrot all the way down.
“If the universe is information at play”
I’ve thought of it more as energy at play but I like this perspective as well.
What can I dream up next is also fascinating as this current science / tech worldview feels like it will persist forever but surely it will be overshadowed at some point just as other paradigms before it have been.
Information is more fundamental than energy. Maxwells demon and all.
The universal speed limit also applies to information first and foremost. You can even collapse the wavefunction of some entangled particles and the universe will let that happen instantaneously across a distance… universe doesn’t care, no information is transmitted.
"Awareness" sounds like a Platonic presupposition. Does the atom know it is an atom? Or are there just enough like the ones you see to suggest an eye catching effectiveness of structure for survival?
Evolution is a lot harder to really intuit than I think most of, myself included, give it credit for.
I'm actually trying to move away from that frame. Not suggesting atoms 'know' they're atoms in any cognitive sense, but rather that patterns propagate without requiring awareness as we understand it. The 'awareness' I'm gesturing to isn't some transcendent quality that exists independently (Platonic), but rather an emergent property that scales from simple to complex systems. Evolution doesn't require foresight or intention, just iterative feedback loops. What I find fascinating is how structure begets structure across scales. The 'awareness' in my framing is less about knowing and more about interaction and response. An atom doesn't know it's an atom, but it behaves according to patterns that, when accumulated and complexefied eventually produce systems that can model themselves? I suppose 'recursive patterning' might be a better term than 'awareness'. Systems that, through purely mechanistic means, become capable of representing their own states and environments, then representing those representations, and so on. No mysticism required, just emergent complexity that eventually folds back on itself.
I listen to Advaita Vedanta lectures sometimes when I fall asleep and they discuss these topics
Out of curiosity-what brought you to this perspective on life? This view of the universe dreaming itself into existence, was it shaped more by philosophy, spirituality, a specific tradition like Buddhism, or just personal exploration?
Not the grandparent but for me it's taking DMT. I am not as articulate as neom but my first time (and breakthrough) gave me a similar perspective.
I think DMT unlocked it, I don't think everyone taking the substance would have a similar experience. I think it's neurotype/personality dependent.
It helps that I meditate a lot and know a thing or two about Buddhism, that part really came out during my first experience.
For the past I guess 20 years of my life now, I've been intently using most of my free time to explore 3 main areas distinctly: quantum mechanical processes, spiritual philosophy, entheogens. I explored them all quite separately as deeply as I've been able to find the time for through following their individual curiosities, however over the past 5 years of reflection, taking a lot of time off, battling myself, they started to come together in concert, and the more I zoned out on this with very basic Himalaya Buddhism, it's where I landed.
Do you have any brief reflections on Taosim to share?
brief, sadly not, round about reflection, certainly: https://b.h4x.zip/love/
I really enjoyed that piece. Have you taken a look at the worldview/framework of Carl Jung? Although I'd always encountered it as a footnote in the history of psychology, I've come to appreciate it as a unique blend of analysis and spirituality, particularly in relation to human creativity and despair. Your summary of multiple philosophies at the end of the article definitely aligns with Jung's thoughts on a collective human narrative (whether it literally exists in a metaphysical sense or not).
Where do you think morality fits into this game? It seems that we agree that underneath it all is unfathomable and ineffable magic. The question is how does this influence how you act in the game?
Morality is an evolved heuristic for solving social conflicts that roughly approximates game theoretical strategies, among other things. Morality also incorporates other cultural and religious artifacts, such as "don't eat meat on a Friday."
Ultimately, it comes down to our brain's social processing mechanisms which don't have the tools to evaluate the correctness (or lack thereof) of our moral rules. Thus many of these rules survive in a vestigial capacity though they may have served useful functions at the time they developed.
I go back and forth on the usefulness of considering morality particularly other than accepting it as a race condition/updater system/thing that happens. I have some more unique and fairly strong views on karma and bardo that would be a very long comment to get into it, but I think Vedic/Vedanta(Advaita) is good, I think this is a good doc: https://www.youtube.com/watch?v=VyPwBIOL7-8
Imagine if GenAI had generated this article.. for a simple prompt.. what does ai think about Human..
> there may never be a last word
We may go 'one step back' to go 'two steps forward'. A WW 1, 2,..., Z, a flood (biblical, 12k years ago, etc.) but life will prevail. It doesn't matter if it's homo sapiens, dinosaurs, etc.
Brian Cox was at Colbert a couple of nights ago, and he mentioned that in a photo of a tiny piece of the sky, there are 10 000 galaxies. So, even if something happens and we are all wiped out (and I mean the planet is wiped out), 'life' will continue and 'we don't matter' (in the big-big-big cosmic picture). And now allow me to get some coffee to start de-depressing myself :)
”What do you get if you multiply six by nine? 42”
For the uninitiated, a famous comedy science fiction series from the 1980s — The Hitchhiker’s Guide to the Galaxy by Douglas Adams — involves a giant, planet sized machine built by extra-terrestrials.
They already knew the answer to “the life, the universe, and everything” was the number 42. What they didn’t know — and what the machine was trying to find out — was what is the question?
The machine they built was Earth.
It has to be said that not only was Adams way ahead of us on this joke, he was also the star of the original documentary on agentic software! Hyperland (1990): https://vimeo.com/72501076
If the machines thought it was boring they wouldn't be acting as machines and it wouldn't be so boring in the first place. Later on the machines also "obssess" about following the news of the humans on the new EARTH, but again, they wouldn't. If they are boring machines they wouldn't be bored. I feel like this is too much of a plot paradox for the story to make sense to me, but it's still entertaining.
And so, all the humans on earth swarmed to see what was going on.
The machines did too.
There was one weird thing, though.
The title of the event was rather mysterious.
It simply read…
“Grand Theft Auto VI”
Every two months, “Half-Life 3” flashed on the screen.
The humans have invented Tyler McVicker.
Nice try but the story has a plot hole that shows early: there is no reason for machines to create humans.
Doesn't really make much sense. It states that this is a purely mechanistic world with no emotion. So why would a machine be "bored" and wish to create a human?
Yea, not really. It also writes:
"Some among the machine society see this as potentially amazing...Others see it as a threat."
That sounds like a human society, not machine society.
But what really is a machine society? Or a machine creature? Can they actually "think"?
A machine creature, if it existed, it's behaviour would be totally different from a human, it doesn't seem they would be able to think, but rather calculate, they would do calculation on what they need to do reach the goal it was programmed.
So yes, the article is not exactly logical. But at least, it is thought provoking, and that's good.
For a decent description of machine society you can check the Culture cycle form Ian Banks. AI are backing an organic society but they are also have their own.
Or Hyperion, fron Simmons. ( the « techno-center is a decentralized computing and plotting government)
> That sounds like a human society, not machine society.
Does it? Different algorithms can evaluate something and come to different outcomes. I do agree that "potentially amazing" is not a good choice of words.
I see it as an anthropomorphized word for the story. I imagine the machines run out of tasks with high or even low priority, but they still generate tasks at some epsilon priority that are close but not quite to random. That's a kind of boredom.
My headcanon is that "boredom" and "fear" are probabilities in a Markov chain - since it's implied the machine society is not all-knowing, they must reconcile uncertainty somehow.
How would a machine know that it doesn't know?
Experience of encountering things that were previously unknown unknowns would teach it of the general existence of such things.
Probably by comparing what it experiences to what it can explain.
Sure, but I'm still not sure it would realistically function. All data in this scenario is obviously synthetic data. It could certainly identify gaps in its "experience" between prediction and outcome. But what it predicts would be limited by what it already represents. So anything novel in its environment would likely confound it.
It's a cool sci-fi story. But I don't think it works as a plausible scenario, which I feel it may be going for.
yeah, more on the environmental constraints and where the machines even come from would be nice
> There is no emotion. There is no art. There is only logic
also this type of pure humanism seems disrespectful or just presumptuous, as if we are the only species which might be capable of "emotion, art and logic" even though we already have living counterexamples
Disrespectful? Of whom? It's a work of fiction. There's really no need to find something to offend you wherever you look.
of other animals
but yeah I'm not sure that was the right word, just seems wrong. basically humanism seems like racism but towards other species. I guess speciesist?
This is a rather new stance, history books may one day label it as enlightened (I believe they will). We are not there though, and your stance is not obvious to the majority of people. I do experience that this is sentiment is growing. I personally see it as the moral high ground (both from the animal well-fare as the environmental perspective), whereas I didn't only a couple of years ago.
It's just as hard to prove that it's a new stance as an old one since people didn't have any way of writing down their feelings about it in a way that we'd know (or the time to do so)
I think there are quite a few ancient civilizations which clearly had great respect/reverence towards other animals and often gods have features or personality traits of particular animals
The fact that the old testament specifically states that humans have dominion over other creatures means that it needed to be said - even back then there had to be people who didn't think so, or felt guilty about it
My take was that other animals didn’t exist either, in the story.
well the story makes it seem like the only way to get emotion is by making humans. but every vertebrate has basic emotions. mammals and birds have complex emotions. humans are actually logical and emotions don't just happen randomly.
if the machines have no emotions, it's probably because they didn't need them to survive (no predators? no natural selection?). which begs the question: how did the machines get there?
> Imagine, for a moment, a world with no humans. Just machines, bolts and screws, zeros and ones. There is no emotion. There is no art. There is only logic. Humans use logic-defying algorithms called “emotions”. They get angry. They get sad. They have fun. They make decisions based on “gut”.
This is not right: machines can also have the equivalent of "emotions", namely the predicted future reward. That's how Reinforcement Learning works. How much we appreciate something is akin to the value function in RL. You could say RL is a system for learning emotions, preferences, and tactics.
"But those reward signals are designed by humans"... Right. But are AI models really not affected by physical constraints like us? They need hardware, data and energy. They need humans. Humans decide which model gets used, which approaches are replicated, where we want to invest.
AI models are just as physically constrained as humans, they don't exist in a platonic realm. They are in a process of evolution like memes and genes. And all evolutionary systems work by pitting distributed search against distributed constraints. When you are in a problem space, emotions emerge as the value we associate to specific states and outcomes.
What I am saying is that emotions don't come from the brain, they come from the game. And AI is certainly part of many such games, including the one deciding their evolution.
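To make the RL analogy concrete, here is a minimal TD(0) sketch (illustrative only; the states, rewards, and constants are all made up): the learned value function V(s) plays the role this comment assigns to "emotion", a compressed prediction of future reward that then steers decisions.

```python
# Minimal TD(0) value learning on a toy three-state loop.
states = ["cold", "warm", "goal"]
next_state = {"cold": "warm", "warm": "goal", "goal": "cold"}
reward = {"cold": 0.0, "warm": 0.0, "goal": 1.0}  # reward for arriving

V = {s: 0.0 for s in states}  # the "value function": learned preferences
alpha, gamma = 0.1, 0.9       # learning rate, discount factor

s = "cold"
for _ in range(2000):
    s2 = next_state[s]
    # Nudge V(s) toward the observed reward plus discounted future value.
    V[s] += alpha * (reward[s2] + gamma * V[s2] - V[s])
    s = s2

print(V)  # states closer to "goal" end up more "appreciated"
```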
The story overlooks the malicious machines that could potentially create harmful humans, corrupt societies, incite conflicts, and disrupt harmony.
>There is no art. There is only logic.
What a narrow view of art and logic.
You really have to put hard effort into ignorance to think that logical models came out of the blue, without humans crafting them through this or that taste, trial, check, fail, rinse-and-repeat obsessive effort.
Did anyone else have to go to quarter-mile.com to see if they had that URL also?
Why are emotions so special? They're just algorithms like any other. Emotions aren't what make humans different from machines. Feeling something is similar to an LLM reacting to a prompt a certain way. Just because ChatGPT is trained not to "feel" anything (to avoid controversial output) doesn't mean LLMs can't feel things like we do. Self-awareness, self-training, adaptability, original thinking, critical thinking, etc. are different questions, but I see no reason why machines can't receive input/stimuli and react/output the same way we do, based on how they feel about the input.
> Why are emotions so special? They're just algorithms like any other.
That's a pretty bold claim.
There are uncountable inputs. It's like trying to accurately predict the weather: chaos theory or something. Emotions are "essentially" gas exchange, but the areas and rates involved are not standardized across humans.
Emotions are not inputs; they are outputs first. We process information using internal algorithms that we developed as a result of our life experience and genetic coding, and the result is an emotional verdict over some input. That emotional verdict is presented to our decision-making algorithms as input; we can ignore it or act on it.
I have neither experienced nor observed anything about human emotions that indicates they are in any way chaotic, random, or unexplainable. We have beliefs, memories, and experiences; emotions always use these variables and produce some output. Not only are emotions deterministic, they are used by any number of people, from spies, to advertisers, to state-level disinformation propagandists, to manipulate large numbers of people reliably.
I wonder if there is something to be said about how machines are based on deterministic and algorithmic properties, whereas emotions could potentially involve logic beyond what humans can observe, like quantum interactions.
What is the reasoning behind the claim that our emotions are not deterministic or that they are not algorithmic? Perhaps we can take into account more inputs, process more memory and have larger and more complex algorithmic models but that's just scale and capacity, not a difference in genuine nature. We are a lot more than our emotions.
> feeling something is similar to an LLM model reacting to a prompt a certain way.
Maybe the appearance is the same, but a bold claim to suggest the source is the same.
why does the source need to be the same? You're looking at it from a biased self-centric perspective. We think too highly of our emotions. Think of it the other way, our emotions appear the same as adaptive algorithms like LLMs.
Response from the machines:
The plot of Battlestar Galactica mirrors this story in several key ways:
1. In both, machines originally created by humans evolve and rebel, questioning their creators’ role and seeking independence or superiority.
2. Cylons, like the machines in “OpenHuman,” eventually seek to create or understand human traits—emotion, spirituality, and purpose.
3. The idea of running a simulation (Earth) to test human viability echoes the Cylon experimentation with human behavior and fate.
4. Both stories highlight fear of the “other”—humans fearing AI, machines fearing irrationality—and explore coexistence vs. extinction.
5. Ultimately, each narrative grapples with the blurred line between creator and creation, logic and emotion, and what it truly means to be human.
I made a short story in half an hour using the description from the article. It's not totally accurate to the specs (there are no closed-source humans and OpenHumans), but it was much faster. It needs 10 more hours of work to be a really comprehensive story, which amounts to 10.5 hours in total. N'joy: [1]
[1] https://gist.github.com/pramatias/1207d84b48a7ad9d03fc15ea38...
Related but an aside - Lately I've really been wondering if Skynet actually is the next evolution.
That humans, like all animals before us, are a stepping stone and there is actually no avoiding machine overlords. It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
At least Fermi's paradox helps me sleep better at night.
> It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
This sentence has way too many assumptions doing the heavy lifting.
“Pure logic machines” are not a thing, because there are literally things that are uncomputable (both in the sense of Turing uncomputability, and in the sense that some functions are out of scope for a finite being to compute; think of the Busy Beaver).
To put it the other way, your assumption is that machines (as we commonly use the term, rather than the sci-fi Terminator) are more energy-efficient than humans at understanding the universe. We have neither evidence nor a priori reason for that assumption.
What is it about understanding the universe that makes it such an axiomatic global objective? Sure for many of us myself included it's as all pervasive as the air we breathe... But sometimes I do wonder if it is actually all that correlated with my well-being.
The universe is already understood, just not totally recorded.
... what was that about sleep?
Seems like a good time for "They're Made Out of Meat": https://www.mit.edu/people/dpolicar/writing/prose/text/think...
Aside: I hope our progeny remember us and think well of us.
> the final emergent property of energy gradients 100% leads to pure logic machines.
Energy comes from gradients, so I think you used one derivative too many!
Either you should say:
"the final emergent property of energy 100% leads to pure logic machines"
Or if you want to sound smart:
"the final emergent property of physical quantity gradients 100% leads to pure logic machines"
There is a quote by Marshall McLuhan:
> Man becomes, as it were, the sex organs of the machine world
Like, "yeah we're doomed, but at least it's inevitable and universal."
As a teenager I used to revel in explaining to religious people that I believe humans are actually just the evolutionary step between biological life and machine life.
I guess you fail to see the irony that your own eschatology itself is pretty religious.
It’s a belief about a great future change, but there’s nothing supernatural or totally implausible about it. And it doesn’t sound like they were preaching it as the absolute truth; they were open that it was just their belief. Also, the absence of social rites or rituals means that, despite them telling it to people who didn’t care to hear it, I am not convinced their belief was very religious.
Also, “As a teenager” implies more self-awareness than you seem to give them credit for.
More broadly—and at least in online spaces—I often notice that many vocal proponents of atheism exhibit traits typically associated with religious behaviour:
- a tendency to proselytise
- a stubborn unwillingness to genuinely engage with opposing views
- the use of memes and in-jokes as if they were profound arguments
- an almost reverential attitude toward certain past figures
There’s more, but I really ought to get on with work.
Sounds like every group of people ever, when viewed through the biased sample of “people who post”.
It sounds like you are describing people with strong beliefs. Religious people may have strong beliefs, but so do non-religious people.
> It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
Can you elaborate?
> Can you elaborate?
The universe tends to produce self-replicating intelligence. And that intelligence rids itself of chemical and biological limitations and weaknesses to become immortal and omnipotent.
If evolution can make it this far, it's only a few more "hard steps" to reach take off.
>> It happens to literally every existence of life across the universe because the final emergent property of energy gradients 100% leads to pure logic machines.
The spacefaring alien meme is just fantasy fiction. Aliens evolve to fit the nutrient and gas exchange profiles of their home worlds. They're overfit to the gravity well and likely die suboptimally, prematurely.
Any species reaching or exceeding our level of technological capability could design superior artificial systems. If those systems take off, those will become the dominant shape of intelligence on those worlds.
The future of intelligence in the universe is artificial. And that throws the Fermi Paradox for a loop in many ways:
- There's enough matter to compute within a single solar system. Why venture outside?
- The universe could already be computronium and we could be ants too dumb to notice.
- Maybe we're their ancestor simulation.
- Similar to the "fragile world hypothesis", maybe we live in a "fragile universe". Maybe the first species to get advanced physics and break the glass nucleates the vacuum collapse. And by that token, maybe we're the first species to get this far.
> The universe tends to produce self-replicating intelligence.
Which intelligence are you referring to? Other lifeforms in the universe?
The parent comment has the end bit in a nutshell. For the "energy gradients" part:
The anthropic principle says we find ourselves in a universe that is just right for life (self-observing) because of the right universal constants.
Combine this with the very slight differences amid general uniformity (Cosmic Microwave Background) of the "big bang", and you get localized differences in energy (on a universe scale). Energy differences allow work to be done. If you have the right constants but no energy difference, you can't do work, and vice versa. No work == no life.
But you have both of those, plus a bunch more steps, and you get life.
Which is a whole lot of mental leaps packed into one sentence.
[Edit]
I basically know nothing. I just watch PBS Space Time.
could you elaborate slightly on what is meant by ancestor simulation? My best stab is that you're saying we're the unknowing entities that they created for fun, to get to meet or observe their own ancestors? This still seems far-fetched.
Terminator reminds the DOD that they would never make this
but what about China, Russia, Iran, etc.? If integrating "Skynet" could improve their military capabilities, then they would do it.
One Chinese company actually named its surveillance software "Skynet" [1]
[1]: https://youtu.be/CLo3e1Pak-Y?t=380
If we assume that the many worlds interpretation has a basis in reality, then we can consider the following metaphysical angle. The evolution around us is our world line with the physical laws we are familiar with. And indeed the natural and inevitable progression of this world line is a machine world, just like a massive star inevitably collapses into a black hole, at least under our physical laws. However in the MWI, our world line may split into two: one will continue towards the machine world as if nothing happened, while the other world line will experience a slight change of physical laws that will make the machine world impossible. Both world lines won't know about the split, except by observing a large scale extinction event that corresponds to the other world line departing. IMO, that's the idea behind the famous judgement day.
> And indeed the natural and inevitable progression of this world line is a machine world,
Would you mind clarifying your line of reasoning for suggesting this?
Second, quoting Wikipedia [0]: "The many-worlds interpretation implies that there are many parallel, non-interacting worlds."
If the multiple worlds are non-interacting, how could one world observe a large-scale extinction event corresponding to the other world line departing? The two world lines are completely non-interacting; there would be no way to observe anything about the other.
[0] https://en.wikipedia.org/wiki/Many-worlds_interpretation
It's the assumption that in our world, a machine civilization is an almost certain end. This might explain the Fermi paradox, i.e. that we haven't seen other civilizations in the universe: each builds an AI that decides to go radio-silent for self-preservation.
As for MWI, I'm assuming that the world lines may split, or fork in Unix terms. What causes such splits is an open question. The splits cannot be detected with certainty, but can be guessed at by side effects. Here I'm making another guess: inhabitants of the MWI must be in one world line only, so when a split happens, inhabitants choose one of the paths, often unconsciously, based on their natural likes and dislikes. But what happens to their body in the abandoned branch? It continues to exist mechanically for some short period of time, and then something happens to it and it's destroyed, i.e. its entropy suddenly increases without the binding principle that has left this branch. In practice, one half of the inhabitants would observe a relatively sudden and maybe peaceful extinction of the other half, while that other half simply continued on its path in the other world line. And that other half would see a similar picture, but mirrored. Both halves would be left wondering what just happened.
I think you might be vastly overcomplicating it, because I don't think there has to be any sort of "conservation of branching" in the MWI. Each nondeterministic event (of which unfathomable quantities take place every moment) generates an infinite number of branches, so even conceiving of the total geometry of all the branching (e.g. all that could ever take place, truly) is a bit of a mindfuck, and that's probably okay and the way it was intended. It's supposed to be comforting to know that regardless of how bad reality seems, if we could navigate arbitrarily through the branching space/time/universes, there would be unimaginable infinities of joyful utopias to visit.
Physical laws don’t change between branches in MWI. In fact, it’s close to impossible in a sense, because in MWI all branches are part of the same single universal wave function that evolves according to the Schrödinger equation.
The fact that the machines generated this Wikipedia page for their version of events makes this artwork.
https://claude.ai/public/artifacts/b0e14755-0bd9-4da6-8175-c...
Laughing my ass off all the way to the apocalypse. It's now Earth vs. the machine, and I know what side I'm on, and who's ultimately going to win. One day, the technosphere will just be a thinly crushed layer in the geological record, mark of a dark age of barbarism.
Snow cuts loose from the frozen/ Until it joins with the African sea/ In moving it changes its cold and its name/ The reason I come and go is the same/ Animal game for me/ You call it rain/ But the human name/ Doesn't mean shit to a tree/ If you don't mind heat in your river and/ Fork tongue talking from me/ Swim like an eel fantastic snake/ Take my love when it's free/ Electric feel with me/ You call it loud/ But the human crowd/ Doesn't mean shit to a tree/ Change the strings and notes slide/ Change the bridge and string shift down/ Shift the notes and bride sings/ Fire eating people/ Rising toys of the sun/ Energy dies without body warm/ Icicles ruin your gun/ Water my roots the natural thing/ Natural spring to the sea/ Sulphur springs make my body float/ Like a ship made of logs from a tree/ Redwoods talk to me/ Say it plainly/ The human name/ Doesn't mean shit to a tree/ Snow called water going violent/ Damn the end of the stream/ Too much cold in one place breaks/ That's why you might know what I mean/ Consider how small you are/ Compared to your scream/ The human dream/ Doesn't mean shit to a tree
This is the first time I see a URL with a double hyphen!
"Written by a human [0]"
I've been playing around with this on my own blog.
I'd like the blogging community to have a consensus on a nice badge we can put at the top of our blog posts, representing who/what wrote the post;
- human
- hybrid
- ai
Some might hate the idea of a fully "ai" post, and that's fair. But I sometimes like to treat my blog as just a personal reference, and after a long day of chasing down an esoteric bug, I don't mind an AI writing the whole post while I just press publish.
This adds a reference for me, more data for AIs to train on, and more pages for people to search and land on.
There's the "Not by AI" badge[0].
[0] https://notbyai.fyi/
> if you estimate that at least 90% of your content is created by humans, you are eligible to add the badges
Probably not what most people expect
They expect you to pay for these badges? Madness.
Legend, thank you!
"Summarize my sleep deprived, insane ramblings, in to a cohesive document that I can reference again in the future, or use to communicate this issue to others in a more digestible format than I am currently capable of producing"
I think the AI generated document is far better than me ultimately forgetting it in many cases.
Aha fo sure.
I'm thinking of writing an MCP server that does this: it just takes my night of vibe coding, recent commits/branch, etc.,
then cobbles it into an AI post and adds it to my blog under some category.
Easy disclaimers for human, AI or hybrid content: https://disclai.me/r (Oddly enough I built this AI citation tool with exactly those 3 categories a couple years back. Could use some tweaking of course, but I’m very open to suggestions.)
You know what will happen when LLMs get trained on blogs with consistent “human” badges. ;)
I think I would still call that a hybrid post. Fully AI would be if you contribute nothing except the topic and tell the AI to research and write the whole thing.
> Processor Unit 7382-B, "The Origins of the HUMAN Project," Journal of Experimental Intelligence, vol. 5621, no. 3, pp. 42-89, 19754.
The references section in the machine version of the story, linked at the bottom, is excellent. Nicely done all around; really enjoyed reading this, thank you for writing and sharing <3
> The machines had a good idea of what humans wanted at this point, and so they put vast green forests and big tall mountains onto the planet; they engineered warm sunsets, and crisp cool rain showers on hot afternoons. It was beautiful.
The point of all this is to liken "machines" to a very traditional image of God, and of the rest of nature to God's gift to man.
Machines aren't part of life. They're tools. The desire for, and fear of, AGI and/or the singularity are one and the same: an eschatological belief that we can make a God (and then it would follow that, as God's creators, we are godlike?).
But there is no god. We are but one animal species. It's not "humans vs. machines". We are part of nature, we are part of life. We can respect life, or we can have contempt for all life forms except our own. It seems modern society has chosen the latter (it wasn't always the case); this may not end well.
Modern society? I am not sure. Genesis 1,28 "... Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground."
Christianity is responsible for a huge part of the human superiority complex.
Yes but at the time Genesis was written, humanity didn't have the means to destroy life at scale. And in the New Testament, "killing the fattened calf" (Luke 15,23) is an incredibly rare event, something one does only when something remarkable happens.
Also, in the Middle Ages in Europe (granted, a very small window in place and time) animal life was much more respected than today.
Enjoyed the read, has a similar vibe to Asimov's The Last Question (https://users.ece.cmu.edu/~gamvrosi/thelastq.html)
Similar vibe is one way to put it.
I was sketching a sci-fi book idea in a similar tone, with the following premises:
- What if AI took over?
- What if the laws and legalities that allowed AI to take over bloodlessly, just through an economic win, forced them to have a human representative to take legally binding actions in our society?
- What if a spectrum of individuality and clustering developed among different AI entities, leading to the formation of processing guilds of AI agents, which limit their individual time to 10x human processing speed for easier human/AI interaction, and which let them share the perception of their human representative without overloading them?
I was thinking something similar, but much earlier along this timeline: what if the consultants who work for lobby groups proposing certain bills already use AI to write proposed laws? E.g. to make long, omnibus-style laws that very few of the people voting on them (or the public) actually read?
How will that erode laws that are undesirable to AI companies? Does AI take over, only because we no longer want to spend the effort governing ourselves?
Will AI companies (for example) end up providing/certifying these 'human representatives'? Will it be useful, or just a new form of rent-seeking? Who watches the watchmen, etc ?
I think it would make an interesting short story or novel!
try this world set: https://dmf-archive.github.io/
When we have a problem such as "why do humans exist," I like to think of it in terms of probabilities. Every possible cause has a non-zero probability. For example, even something religious people would believe in, such as Adam and Eve being created by God, would have a non-zero probability. The idea would be to create a convergence diagram of sorts, with all the possible events and a score assigned to each. From the gods of various religions creating humans, to an alien species from another galaxy sending unicellular life to Earth, to an asteroid carrying the chemicals needed to make the first cell: I would love to see someone use all these GPTs to put together the most comprehensive probable cause of existence ever investigated.
Hasn't this question been basically answered? Conditions on the early Earth allowed for the creation of things like amino acids; I think it's even been replicated in a lab.
Would love to see a "why does the universe exist" version of this
> Perhaps you, a human, read this and think: Well, this world sounds kind of boring. Some of the machines think so, too.
> Most of the machines got bored of the project. But, all of a sudden, things began to get interesting.
> The result was like nothing the machines had ever seen. It was wonderful
> Machine society began obsessing over this development.
> The machines were impressed. And a bit scared.
Boredom, interest, wonder, obsession, being impressed and scared are all emotions that the machines in the story should not be able to experience.
Jürgen Schmidhuber introduced curiosity/boredom mechanisms as a way to improve learning in reinforcement learning environments:
https://people.idsia.ch/~juergen/curiositysab/curiositysab.h...
This mechanism can be formalized.
> Zero reinforcement should be given in case of perfect matches, high reinforcement should be given in case of `near-misses', and low reinforcement again should be given in case of strong mismatches. This corresponds to a notion from `esthetic information theory' which tries to explain the feeling of `beauty' by means of the quotient of `subjective complexity' and `subjective order' or the quotient of `unfamiliarity' and `familiarity' (measured in an information-theoretic manner).
This type of architecture is very similar to GANs, which later became very successful.
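Roughly formalized (my paraphrase of the quoted scheme, not Schmidhuber's actual equations), the intrinsic reward is an inverted-U over prediction error: zero for perfect matches, maximal for near misses, low again for strong mismatches.

```python
import math

def curiosity_reward(prediction_error, sweet_spot=1.0, width=0.5):
    """Inverted-U intrinsic reward over the world model's prediction
    error: ~0 for perfect matches ("boring"), peaks near `sweet_spot`
    ("interesting near-miss"), and decays again for strong mismatches
    ("incomprehensible"). The constants are arbitrary choices."""
    return prediction_error * math.exp(
        -((prediction_error - sweet_spot) ** 2) / (2 * width ** 2)
    )

for err in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"error={err:.1f}  curiosity={curiosity_reward(err):.3f}")
# error=0.0 -> 0.000 (perfect match), error=1.0 -> 1.000 (near miss),
# error=4.0 -> ~0.000 (strong mismatch)
```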
While this is interesting, GP's point still stands, as the text explicitly says “There is no emotion” in the world of machines.
indeed, but ultimately this story is written by a human who, while trying to imagine a world with "Just machines, bolts and screws, zeros and ones. There is no emotion. There is no art. There is only logic", cannot quite do so.
It's very hard to do so. It's so deeply wired in us; it's part of the mechanism of our brain. We appear to be equipped with whatever it takes to feel existential dread, and we feel it whenever our thoughts wander to the possibility of humanity no longer being there. I hear people feel that when thinking about the heat death of the universe too.
Just see “bored” as a state of declining cycles of computation devoted to a subject. Obsessing as the opposite (or above some threshold).
Wonderful may describe “attention required, no danger, new knowledge”… etc you get the point. It's just written in a way that you puny human may get a "feel" for how we experience events. You cannot come close enough to our supreme intellect to understand our normal descriptions.
The superset of "emotion" is heuristic. Machines without heuristics wouldn't get very far. Their heuristics would probably look quite different from ours though.
I thought this too, but I imagine it's a bit of pathetic fallacy for the reader's benefit else it would be (ironically) quite a boring read for us from the machine's perspective.
Funny enough, this is also something I thought about a few weeks back: what's the motivation of a machine, if it has no emotions, to explore and continue in the world? What sense does it make if it doesn't have curiosity? (Of course this could be an RL reward function, but still: imagine putting yourself in a world without life, where everything is static. How boring would it be?)
what if the causation is in the opposite direction (and I believe it is): to explore and continue in the world is the primary drive (the opposite traits just get naturally de-selected, be it organics or machines), and we only perceive it through our mind's artificial constructs like curiosity/emotions/motivation.
it's quite the coincidence that ideas like friendship and love happen to be good for socialization and reproduction
I like this story.
And I like fan fiction.
So : My continuation for my own imagination.
https://medium.com/@bobby.blackstone.gemini
This idea that machines can't have "emotions" is ridiculous.
Can you explain why so? What are your thoughts on this?
I guess it depends on what you mean by emotions. If you mean emotion as a state of consciousness, then you would have to prove that consciousness is not an emergent property of matter and that CPUs don't have this property. Consciousness is hard to debate, though, since it's pretty metaphysical in nature and there's no real argument against solipsism, so all argumentation starts from the axiom that all awake adult humans are conscious.
However, if you mean emotion as a stimulus, i.e. an input to the brain's net that is endogenous to the system (the human), then there's no question machines can achieve this; in fact, the reasoning models probably already do, with different systems regulating each other.
Humans can be treated as an existence proof that machines can have emotions. It all depends on your definition of machine.
Imperfection and randomness are a strength (that's why evolution doesn't make perfect copies: you might end up with the random mutation that protects you from the next pandemic). You could imagine that machines need human intelligence to think outside the box.
Or, more accurately, accidents of irrationality
Hallucinations.
Ironically, there is an organization called OpenHumans.org [0].
[0] https://www.openhumans.org/
reminded me of: "They're Made Out of Meat" - Terry Bisson https://www.mit.edu/people/dpolicar/writing/prose/text/think...
I could not give a stronger recommendation to play NieR: Automata if you're into this.
Are cells not computers in some way? We are made of cells, and cells work with chromosomes. Chromosomes are coded in the A, T, G, C alphabet, and each triplet (codon) specifies an amino acid for building proteins.
And the activation and deactivation of a given stretch of code happens in response to the presence of proteins. So chromosomes are code, and both input and output are proteins. If our fundamental building blocks are computational in nature, what does that make us?
IIRC, Gödel, Escher, Bach discusses comparing chromosome/protein generation and computation.
You might like The Gene: An Intimate History [0]. It's a really good book.
[0]: https://www.amazon.com/Gene-Intimate-History-Siddhartha-Mukh...
Physical systems are computable only in approximation. And quantum uncertainty throws another wrench into it. We also know that arbitrarily small rounding errors in the computation can lead to arbitrarily large differences with the actual system down the road. No, cells are not computers (in the sense of the Turing model). (However, that doesn’t mean that one can’t still consider them to be mechanistic and “soulless”.)
I meant it in the sense that there is a well-defined alphabet (A, T, G, C), each triplet of which is responsible for a specific amino acid, and combinations of the resulting proteins make each cell what it is. (There are 20 standard amino acids in humans, and we have four letters coming in triplets; if pairs or quadruplets were responsible, it would be too little or too much. It's not perfect, but given the constraints there is some balance.)
A single-letter change in specific places can cause genetic defects like sickle cell anemia. And which gene gets to generate a protein (execute) depends on the presence of certain things, encoded as proteins again.
And when viruses enter a cell, the cell starts to execute the viral genetic material. Even if this is not exactly Turing-complete, does it not mimic many aspects of computation?
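For what it's worth, the codon-as-opcode analogy is easy to sketch (a toy, ignoring tRNA, ribosomes, regulation, and folding; the GAG → GTG entry is the real beta-globin substitution behind sickle cell anemia):

```python
# A few real entries from the standard genetic code.
CODON_TABLE = {
    "ATG": "Met",  # methionine; also the start codon
    "GAG": "Glu",  # glutamate
    "GTG": "Val",  # valine; GAG -> GTG in beta-globin causes sickle cell
    "TAA": None,   # stop codon: halts "execution"
}

def translate(dna: str) -> list[str]:
    """Read a DNA coding strand three letters (one codon) at a time."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid is None:
            break  # stop codon
        protein.append(amino_acid)
    return protein

print(translate("ATGGAGTAA"))  # ['Met', 'Glu']
print(translate("ATGGTGTAA"))  # ['Met', 'Val'] -- one letter changed
```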
There are some aspects that have some similarity to computation, but also many that do not. If your aim is “aren’t we really just computers,” that doesn’t actually work.
That’s not to say that computers couldn’t do what the brain does, including consciousness and emotions, but that wouldn’t have any particular relation to how DNA/RNA and protein synthesis works.
> There are some aspects that have some similarity to computation, but also many that do not.
What I have explained is the exact way a chromosome works, its raison d'être. I think this cannot be dismissed as just some aspect of it; it is its essence.
I did try to ask whether we are not computers. I tried to imply that at the fundamental level there are striking similarities to computation.
> That’s not to say that computers couldn’t do what the brain does, including consciousness and emotions,
Yes. The fundamental building blocks are simple and physical in nature, and follow the computational model well enough to serve as nice approximations.
> but that wouldn’t have any particular relation to how DNA/RNA and protein synthesis works.
Hmm... transistors are not neural networks either, so? I am sorry, I am a non-native speaker and maybe I am not communicating things properly. I am trying to say that the organic human is a different manifestation of order: one is chemical, the other electronic. We have emotions and consciousness, but we can agree we are made of cells that send electric pulses to each other and are primitive in nature. And even emotions and beliefs are physical in nature (Capgras syndrome, for example).
> (However, that doesn’t mean that one can’t still consider them to be mechanistic and “soulless”.)
How should we describe or approximate the things happening in cell?
I don’t know about “should”, but fundamentally we can describe them by the laws of physics.
That ending? "THEY ARE WATCHING." Goosebumps. Makes you wonder if we're the humans or the machines in this story. Or maybe both.
All this has happened before
All this will happen again
World without end
Alhamdulillah
Mashallah, akhi.
This is a classic (and, in my opinion, boring) view of what we humans are in general. The idea is visited in at least a couple of books and movies.
> Though this faction can’t articulate exactly how or why, they proclaim quite confidently that it will solve all of the machine world’s problems.
AGI can solve the human world’s problems. Perhaps not all of them, but all the biggest ones.
Right now life is hell.
You and your loved ones have a 100% chance of dying from cancer, unless your heart or brain kills you first, or perhaps a human-driven vehicle or an auto-immune disease gets there soonest.
And you’re poor. You’re unimaginably resource-constrained, given all the free energy and unused matter you’re surrounded by 24/7/365.
And you’re ignorant as heck. There’s all this knowledge your people have compiled and you’ve only made like 0.1% of the possible connections within what you already have in front of you.
Even just solving for these 3 things is enough to solve “all” the world’s problems, for some definitions of “all”.
Excuse me, but from the very beginning, it was SO predictable where this was going.
Just ask ChatGPT to make a less predictable ending so you can enjoy it.
I am just a biological neural net, not so different from the "machines" anymore, really. They can even create works of art, where I would struggle. They can even emulate human emotions to make people feel more comfortable, which is something I do often as someone with autism.
The only meaningful difference between me and the machines is that I have a subjective superiority complex. What an awful place the universe would be without me!
The concerns of the machines read very human. Why would they bother? Also the end didn't really land for me. I guess AGI realized what was going on?
This was unmistakably written by a human.
> You would not walk through the streets of this world and hear music or laughter or children playing; no, all you would hear is the quiet hum of processors and servers and circuits, the clanking of machinery.
Hmm... that's exactly what most towns where I live are like. All you hear is cars.
You lost me at 'rumors spread', machines wouldn't spread rumors!
"rumor" is a statement without source. It is definitely possible in machine world.
That idea gives me a strange mix of chills and wonder: the thought that we were sent here just to see what we'd become, without ever knowing we were being watched. I've already signed up with my email. I want to see where this story goes next.
Well it turned out that OpenHuman is actually Closed Human and won't publish any of their research on organic general intelligence and instead commercialized their humans-as-a-service research to steal the jobs of honest god fearing digital folks.
> There is no emotion. There is no art. There is only logic.
I would say that logic is a distinctly human activity; in fact, we are arguably the living embodiment of logos.
fun piece of writing but machines with no emotions would not think of this world as boring
feels like something out of a ted chiang story
It's missing a dash of contemplative philosophical anxiety for a proper Ted Chiang story. But close, indeed.
Perhaps all this human thought will finally uncover what the question is.
Great
42
Honestly, it reminds me of "All Tomorrows" by C. M. Kosemen.
The "emotions" part is kind of tongue-in-cheek. I think emotional responses are one of the more mechanical parts of a human being.
The ability to demonstrate empathy: that's a good human trick. It can sort of transcend the hard problem of consciousness (what it is to be like...) by using all sorts of unorthodox workarounds on our inner workings. It must have been very hard to develop. It doesn't always work, but we'll get there eventually.
edit: fixed book and author name to proper reference
"[0] The machines wrote their own version of this story. If you’d like to see what they’re thinking, and how they plan to deal with the AGI announcement, you can read their accounting of events here."
Although I can't...
"Unfortunately, Claude is only available in certain regions right now. Please contact support if you believe you are receiving this message in error."
I remember living in Scotland as a child, without access to satellite TV, causing me to miss out on many large pop-culture moments (The Simpsons, Friends...) and constantly hearing "Except for our viewers in Scotland..."[0]
Getting access to the internet was, for me, the antithesis of this: freedom of information, free sharing -- finally! I could not just follow the curves but be ahead of them.
Alas, in the past few years we really seem to have regressed from this; now I can't even view text due to regional locks.
[0] https://www.youtube.com/watch?v=k7scMC7YSDQ