Before you start to feel smug about this and think you're above it:
I've read three blog posts in as many days where their authors quietly reflect that Claude is so good that it has effectively hijacked their own decision making processes when they weigh the value of starting a project.
Do they embark on it, or hand it over to Claude, even if the process is mind-numbing and they learn nothing?
Read it. Was not impressed. While the hypothesis is sound (and likely to be true), the paper itself is a microcosm of everything wrong with papers these days. For example, the referenced "Web Appendix Table W3", which promises the seed prompts, is missing.
Couldn't care less for a missing table or figure, or even for the whole paper. The theory is interesting in itself.
You do not care about a foundational aspect of the paper? Bold. The theory may be interesting AND wrong based just on their initial seeds. Are you OK with that?
Yes. I don't believe such theories are proven or disproven based on "data" and figures.
Interesting. How are they proven or disproven if 'data' and figures are not appropriate?
This is teh interwebz. Graphics and footnotes and attachments get garbled and lost. sigh
Listen, I am not some anal, super-obsessive crazy who gets his jollies out of finding minor discrepancies (though it is fun in the middle of an argument). The article was interesting, so I wanted to understand it, and now I can't (well, not without some hunting anyway). Yeah, I am a horrible person for asking that basic editorial standards be applied. The funny thing is, only a decade ago that kind of shoddy work would not fly, or at least would not be so commonly found.
Points taken.
Doesn't sound like there was any incentive to get the answer right, so why would anyone bother fact checking AI answers. These marketing researchers are basically trying to rebrand path of least resistance to be a new thing?
On brand for Wharton I guess.
>These marketing researchers are basically trying to rebrand path of least resistance to be a new thing?
Or they're trying to show how the "path of least resistance" applies to AI use, but you took the path of least resistance and made an uncharitable interpretation of their paper :)
Not really, the same study could be done with giving people a calculator to do long division. How much participants bother to check the work is a function of 1) their expectations for how accurate the tool is 2) how much time they are afforded 3) what the upside of additional accuracy is. Just because they largely default to accepting the answers at face value doesn't mean they are experiencing "cognitive surrender", and doesn't mean calculators are some 4th system of thinking.
The Wharton paper this is taken from was discussed last month (142 comments):
https://news.ycombinator.com/item?id=47467913
Then you "hack" System 3 and direct everyone to buy your advertiser's product. -- Someone at Google, probably.
LLMs are the final form of advertising and propaganda. Seamless and undisclosed. There's no possibility that it isn't the endgame.
“Step right up! We’re doing a new training run and offering positive brand sentiment to the 1,000 highest bidders.”