So, an Ising model?
(reads the article)
It's an Ising model!
Hari Seldon et al.
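For readers who haven't met it, here is a minimal sketch of the kind of Ising-style opinion dynamics the thread is riffing on: binary opinions on a lattice, flipped by Metropolis updates. The lattice size, coupling J, and "social temperature" T below are arbitrary demo values I chose, not parameters from the article.

```python
# Minimal Ising-style opinion model (illustrative toy, not the paper's model).
# Agents hold a binary opinion (+1/-1) on a 2D lattice and update via the
# Metropolis rule: agreeing with neighbors lowers "energy".
import math
import random

L = 32          # lattice side length (hypothetical size)
J = 1.0         # coupling: agents prefer agreeing with neighbors
T = 2.0         # "social temperature": higher T = noisier, less conformist agents
STEPS = 200_000 # number of single-agent update attempts

random.seed(0)
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def neighbor_sum(i, j):
    # Periodic boundary conditions, 4 nearest neighbors.
    return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
            + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

for _ in range(STEPS):
    i, j = random.randrange(L), random.randrange(L)
    # Energy change if this agent flips its opinion.
    dE = 2.0 * J * spins[i][j] * neighbor_sum(i, j)
    # Metropolis rule: always accept energy-lowering flips, sometimes accept others.
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1

# "Consensus" is just the magnetization: 0 = split opinions, +/-1 = full agreement.
m = sum(sum(row) for row in spins) / (L * L)
print(f"average opinion (magnetization): {m:+.3f}")
```

Below the 2D Ising critical temperature (roughly T ≈ 2.27 in these units) the lattice polarizes into near-unanimous blocks, which is the usual statistical-physics reading of echo chambers; above it, opinions stay mixed.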
Ooh my kind of paper.
Also an older paper that I was introduced to, which exposed me to the concept of cascade sizes.
https://www.pnas.org/doi/10.1073/pnas.1517441113
And they went into it too:
> Extending the analysis to 50,220 fact-checking posts from dedicated debunking pages, Zollo et al. [288] find that corrective information remains almost entirely confined to the scientific echo chamber: approximately two-thirds of likes on debunking posts come from science-oriented users, and only a small fraction of conspiracy-oriented users engage with such content. Sentiment analysis of comments, based on a supervised classification model, reveals that responses to debunking posts are predominantly negative, regardless of the commenter's orientation. Strikingly, the rare conspiracy users who do interact with debunking tend to increase their subsequent activity within the conspiracy echo chamber, suggesting that exposure to dissenting information can reinforce rather than attenuate prior beliefs. Otherwise stated, results indicate that the spread of misinformation online is less a problem of information scarcity than of entrenched structural and cognitive segregation, where homogeneity and polarization govern the dynamics of both the diffusion of false narratives and the reception of their correction.
Use this the next time someone promises that online conversations naturally tend towards insight.
All this “physics of news” framing repeats the same mistake mainstream economics made decades ago: confusing human action with measurable physical phenomena. People aren’t particles, and opinions aren’t spin states. As Mises and Hazlitt argued, mathematical models give spurious precision when applied to purposeful behavior: they hide their "arbitrary assumptions" behind elegant equations. Treating communication, belief, and motivation as quantifiable variables may look rigorous, but it strips away meaning, choice, and context, the very essence of human decision. What results isn’t insight, but an illusion of control dressed up as science.
Physics operates on mechanical causality while human behavior does not.
"All models are wrong; but some are useful." - George Box
Even particles aren't actually particles, nor are spin states actually spin states. The map is not the territory. Physics models are only useful (and "correct") to the extent that they make successful predictions. If the adaptation of these principles to social communication yields useful predictions, then however inaccurate they may be in reproducing the exact nature of what they model, they are nonetheless useful and therefore worthwhile. FTA: "In summary, we review both empirical findings based on massive data analytics and theoretical advances, highlighting the valuable insights obtained from physics-based efforts to investigate these phenomena of high societal impact."
Natural scientific models use the vocabulary of pre-existing intuitions we have from interacting with the world: particles, rotation, etc. We try to simplify predictions by mapping experimental outcomes using things we already know and can directly perceive.
Using these abstractions as foundation for models of social dynamics and modern media feels a little wrong: the mapping is imperfect and incomplete, and as these physical models inevitably become outdated it will be more and more difficult to make sense of social dynamics models that use these physical models as a given.
The word “some” in the quote from Box is doing a lot of heavy lifting.
If a model is useful, I’d like to see it being used (outside academia, where there’s minimal penalty for complexity and a high emphasis on novelty).
If models like these are widely adopted at social media companies or news agencies, it’s fair to say OP’s take isn’t valid. Otherwise they may have a point.
Agreed. But these kinds of models almost always start in academia, that's one of the big reasons we have academia, to explore ideas that may (or may not) be useful. My point was that you can't prejudge the usefulness of a model simply because it doesn't fully replicate all the complexity of the phenomenon being modeled.
These ideas are used, and they influence what policy is crafted.
You can’t predict what an individual will do, but work like this kills many inaccurate ideological positions that we inherited.
There’s a paper from 2016 that shows how posts saturate/cascade through conspiracy communities and that they have distinct cascade dynamics. This wasn’t a model; it was a description of observed behavior.
Or take some relatively recent work from Harvard, which suggests that while our capacity to create misinformation has increased in both quantity and quality, its consumption rate seems to be stable.
> kills many inaccurate ideological positions that we inherited.
It doesn't, which is part of the point the OP is making. And now my point: it's OK that these pseudo-scientific "revelations" don't kill those "inaccurate ideological positions", because that's the whole point of human free will. There's no "accurate ideological position" when it comes to day-to-day life, or to societal life in general.
People are not that complicated, and there's abundant evidence that physical models of social phenomena have decent predictive power, notwithstanding all the intervening complexity. The argument of statistical physics is not that people are particles, but that the equations we've found good at describing various natural phenomena work in social contexts because those processes tend toward efficiently conserving energy and the like. Same reason many human and animal activities tend toward power-law distributions.
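To make the power-law remark concrete, here is a toy of my own (not from the paper): a Simon/Yule "rich get richer" process in which popular items attract further popularity, which by itself is enough to produce a heavy, roughly power-law tail. The step count and new-item probability are arbitrary demo choices.

```python
# Simon/Yule preferential-attachment toy: popularity begets popularity.
import random

random.seed(1)
NEW_ITEM_PROB = 0.05   # chance a step creates a brand-new item (e.g. a new post)
STEPS = 200_000

counts = [1]           # counts[i] = popularity of item i; start with one item
tokens = [0]           # one entry per unit of popularity, so a uniform draw
                       # selects an item with probability proportional to its count

for _ in range(STEPS):
    if random.random() < NEW_ITEM_PROB:
        counts.append(1)
        tokens.append(len(counts) - 1)
    else:
        i = random.choice(tokens)   # picked with probability ~ counts[i]
        counts[i] += 1
        tokens.append(i)

# Crude tail check: how many items reach at least k units of popularity.
for k in (1, 10, 100, 1000):
    print(k, sum(c >= k for c in counts))
```

The survivor counts thin out gradually across several orders of magnitude instead of collapsing, which is the hallmark of a heavy tail; real share or follower counts are messier, but the mechanism is the same flavor.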
You should deploy a quant trading model based on your predictive behavior theory and compound your way to be the richest person on earth. It's not that complicated apparently!
I think GP meant to say that people are not that sophisticated. Newtonian physics is not very sophisticated, but fortunes have been made and broken on deploying it (roulette, dice, coin flipping, etc.).
Statistical physics, OTOH, is slightly more sophisticated than the median human, so some experts have indeed made money building related models.
Did you read the paper? Or even the intro? If a model has predictive power, it's capturing something, end of story. What you do with it, how pop-sci spins it, or how you interpret it has nothing to do with whether or not it's useful. That's your projection. Everything you say it doesn't do, as if that were an argument against the paper, overlaps perfectly with the things it never claimed to do.
Predictive power alone doesn’t equal causal understanding. The paper models news and opinion spread as physical processes that may (over)fit observed data, but it never establishes why these patterns occur. No counterfactuals, no intervention logic, no identification strategy. As causal inference work (like Stefan Wager's) makes clear, explanation demands more than correlation. Treating human communication as node-to-node contagion might predict past outcomes, but it misses the purposive, context-driven nature of choice. So while the model captures statistical regularities, it lacks the causal rigor needed to claim genuine understanding of human behavior.
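A toy of my own (not from Wager's work or the paper) showing what "predictive power without causal understanding" looks like: a hidden confounder makes X an excellent predictor of Y, yet intervening on X changes nothing.

```python
# Confounded prediction vs. intervention (illustrative sketch only).
import random
import statistics

random.seed(3)
N = 50_000

def corr(a, b):
    # Pearson correlation, population definition throughout.
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

# Observational world: Z -> X and Z -> Y, no arrow from X to Y.
Z = [random.gauss(0, 1) for _ in range(N)]
X = [z + random.gauss(0, 0.5) for z in Z]
Y = [z + random.gauss(0, 0.5) for z in Z]
print(f"observational corr(X, Y): {corr(X, Y):.2f}")        # strong

# Interventional world: we set X ourselves (do(X)), breaking the Z -> X arrow.
X_do = [random.gauss(0, 1) for _ in range(N)]
Y_do = [z + random.gauss(0, 0.5) for z in Z]                 # Y still depends only on Z
print(f"interventional corr(do(X), Y): {corr(X_do, Y_do):.2f}")  # ~0
```

An identification strategy is exactly the extra structure needed to tell those two worlds apart from data.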
I'm assuming you've never predicted things in practice for a living, e.g. as a quant trader? Quants have something called a "deflated Sharpe ratio" because p-hacking / overfitting historical data is so common and produces losses when projected into the future.
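For anyone who hasn't seen the failure mode: a small sketch (my own toy, not a real deflated-Sharpe calculation) of how trying many zero-edge strategies and keeping the best backtest manufactures an impressive in-sample Sharpe ratio that evaporates out of sample.

```python
# Backtest overfitting by selection: best-of-many noise looks like skill.
import random
import statistics

random.seed(7)
N_STRATEGIES = 200      # number of candidate signals we "tried"
N_DAYS = 252            # one year of daily returns

def sharpe(returns):
    # Annualized Sharpe ratio of a daily return series (risk-free rate ~ 0).
    return statistics.mean(returns) / statistics.stdev(returns) * (252 ** 0.5)

def noise_strategy():
    # Zero-edge strategy: i.i.d. Gaussian daily returns, mean 0, vol ~1%/day.
    return [random.gauss(0.0, 0.01) for _ in range(N_DAYS)]

# "Research": backtest many noise strategies and keep the best-looking one.
in_sample = [noise_strategy() for _ in range(N_STRATEGIES)]
best = max(range(N_STRATEGIES), key=lambda i: sharpe(in_sample[i]))

# "Deployment": the chosen strategy still has zero edge going forward.
out_of_sample = noise_strategy()

print(f"best in-sample Sharpe   : {sharpe(in_sample[best]):+.2f}")
print(f"its out-of-sample Sharpe: {sharpe(out_of_sample):+.2f}")
```

As I understand it, the deflated Sharpe ratio deflates that in-sample figure by accounting for the number of trials and the non-normality of returns, which is why the statistic exists in the first place.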
are you claiming that human action is not measurable physical phenomena?
Human action is entirely causally dependent on human psychology vis-à-vis biology, of which we currently have only a rudimentary formal understanding and certainly not a sufficient model of its structure, let alone of the relation between its aspects and the resulting actions.
At the same time, statistical methods are interesting and suggestive but should be understood at the relatively coarse level they inhabit.
Both approaches have their uses and it is worth delineating the boundary between their respective appropriate contexts.
Human action isn’t just another physical process because it’s driven by intention, not mechanical causation. A rock falls because gravity compels it; a person acts because they want something to happen. That difference makes human behavior fundamentally qualitative. It’s rooted in meaning, interpretation, and choice. You can measure motion, but you can’t measure purpose. Once you strip away intention to fit behavior into a mathematical model, you’re no longer describing human action. You’re describing an abstraction that behaves like a machine. The numbers might be tidy, but they stop representing what people actually do.
Do you not believe that a person's wants can be shaped by the media they are exposed to?
Do you not believe that people sometimes seek out media that confirms their existing biases?
Long-Term Capital Management went bankrupt because they believed they could model human behavior with their team of Nobel Prize-winning economists and Fields Medal mathematicians. Most machine-learning quant funds fail for a reason [1]. In practice, behavior is highly unpredictable, even with sophisticated models.
[1] https://www.garp.org/hubfs/Whitepapers/a1Z1W0000054x6lUAA.pd... [1] https://www.youtube.com/watch?v=BRUlSm4gdQ4
Gas isn't a continuous medium, either; it's made up of particles. (Not even true particles, either, and sometimes that matters.) We can still create equations for sound waves, though.
Now, sure, humans are more complicated than gas molecules, and have an element of choice in what they do. But in bulk, they still behave in ways that can be modeled mathematically - perhaps not perfectly, but enough that it can still give some actual insight.