Most of the folks on this topic are focused on Meta and Yann’s departure. But, I’m seeing something different.
This is the weirdest technology market that I’ve seen. Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word and now that gets rewarded with billions of dollars in valuation.
That's been true for the last year or two, but it feels like we're at an inflection point. All of the announcements from OpenAI for the last couple of months have been product focused - Instant Checkout, AgentKit, etc. Anthropic seems 100% focused on Claude Code. We're not hearing as much about AGI/Superintelligence (thank goodness) as we were earlier this year; in fact, the big labs aren't even talking much about their next model releases. The focus has pivoted to building products from existing models (and building massive data centers to support anticipated consumption).
A lot of them left within their first days on the job. I guess they saw what they were going to work on and peaced out. No one wants to work on AI slop and the mental abuse of children on social media.
I don't understand how an intelligent person could accept a job offer from Facebook in 2025 and not understand what company they just agreed to work for.
If Claude Code is Anthropic’s main focus why are they not responding to some of the most commented issues on their GitHub? https://github.com/anthropics/claude-code/issues/3648 has people begging for feedback and saying they’re moving to OpenAI, has been open since July and there are similar issues with 100+ comments.
Hey, Boris from the Claude Code team here. We try hard to read through every issue, and respond to as many issues as possible. The challenge is we have hundreds of new issues each day, and even after Claude dedupes and triages them, practically we can’t get to all of them immediately.
The specific issue you linked is related to the way Ink works, and the way terminals use ANSI escape codes to control rendering. When building a terminal app there is a tradeoff between (1) visual consistency between what is rendered in the viewport and scrollback, and (2) scrolling and flickering which are sometimes negligible and sometimes a really bad experience. We are actively working on rewriting our rendering code to pick a better point along this tradeoff curve, which will mean better rendering soon. In the meantime, a simple workaround that tends to help is to make the terminal taller.
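For anyone curious what that tradeoff looks like in practice, here is a minimal sketch (TypeScript/Node, not Claude Code's actual renderer; the function name is made up) of how a TUI typically redraws its viewport in place with ANSI escape codes, and why frames taller than the terminal end up polluting scrollback:

    // Minimal sketch of the viewport-vs-scrollback tradeoff described above.
    const ESC = "\x1b[";

    function redrawInPlace(frame: string[], prevHeight: number): void {
      // Move the cursor up over the previous frame, then clear and rewrite
      // each line. If the previous frame was taller than the terminal, its top
      // rows have already scrolled into scrollback and can no longer be erased,
      // which is what produces duplicated or flickering output - and why making
      // the terminal taller tends to help.
      if (prevHeight > 0) {
        process.stdout.write(`${ESC}${prevHeight}A`); // cursor up N lines
      }
      for (const line of frame) {
        process.stdout.write(`${ESC}2K${line}\n`); // erase line, write new content
      }
    }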
It’s surprising to hear this get chalked up to “it’s the way our TUI library works”, while e.g. opencode is going to the lowest level and writing their own TUI backend. I get that we can’t expect everyone to reinvent the wheel, but it feels symptomatic of something that folks are willing to write off their issues as an unfortunate and unavoidable symptom of a library they use, rather than deeming that unacceptable and going to the lowest level.
CC is one of the best and most innovative pieces of software of the last decade. Anthropic has so much money. No judgment, just curious, do you have someone who’s an expert on terminal rendering on the team? If not, why? If so, why choose a buggy / poorly designed TUI library — or why not fix it upstream?
That issue is the fourth most-reacted issue, and third most open issue. And the two things above it are feature requests. It seems like you should at the very least have someone pop in to say "working on it" if that's what you're doing, instead of letting it sit there for 4 months?
Thanks for the reply (and for Claude Code!). I've seen improvement on this particular issue already with the last major release, to the extent that it's not a day to day issue for me. I realise Github issues are not the easiest comms channel especially with 100s coming in a day, but occasional updates on some of the top 10 commented issues could perhaps be manageable and beneficial.
It's entirely possible they don't have the ability in house to resolve it. Based on the report this is a user interface issue. It could just be some strange setting they enabled somewhere. But it's also possible it's the result of some dependency 3 or 4 levels removed from their product. Even worse, it could be the result of interactions between multiple dependencies that are only apparent at runtime.
>It's entirely possible they don't have the ability in house to resolve it.
I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.
If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.
This isn't a slight against Anthropic, I love their products and use them extensively. It's more a recognition of the fact that the more difficult aspects of engineering are still quite difficult, and in a way LLMs just don't seem well suited for.
Seems these users are getting it in VS Code, while I am getting the exact same thing when using Claude Code on a Linux server over SSH from Windows Terminal. At this point their app has to be the only thing in common?
That's certainly an interesting observation. I wonder if they produce one client that has some kind of abstraction layer for the user interface & that abstraction layer has hidden or obscured this detail?
> Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word
I’ve worked for multiple startups and I’ve watched startup job boards most of my career.
A lot of VC-backed startups have a founder with a research background and are focused on proving out some hypothesis. I don’t see anything uncommon about this arrangement.
If you live near a University that does a lot of research it’s very common to encounter VC backed startups that are trying to prove out and commercialize some researcher’s experiment. It’s also common for those founders to spend some time at a FAANG or similar firm before getting VC funded.
Certainly research has made it into product with the help of the innovators that created the research. The dial is turned further here where the research ideas have yet to be tried and vetted. The research begins in the startup. Even in the dotcom era, the research prototypes were vetted in the conferences and journals before taking the risk to build production systems. This is no longer the case. The experiments have yet to be run.
I personally see this as a positive trend. VC in its earliest form was concerned with experiments that had high technology risk. I am thinking of companies like Genentech and scientists like biochemist Herbert Boyer, who had pioneered recombinant DNA technology.
After that, VC had become more like PE, investing in stuff that was working already but needed money to scale.
Yeah, there has been some lamenting that all the money being thrown at technology hasn't gone toward anything truly game changing, basically just variations of full stack apps. A few failed moonshots might be more interesting at least.
I agree, if anything spending money on high technology risk is Silicon Valley going back to its roots.
Nobody had a way to do silicon transistor manufacturing at scale until the traitorous eight flipped Shockley the bird and took a $1.4M seed investment from Sherman Fairchild.
Big bets on uncertain technology are what tech is supposed to be about.
It makes sense, it’s a simple expected value calculation.
There are trillions of labor dollars that can be replaced by software. The US alone has almost $12 trillion of labor annually.
If an AI company has a 10% shot of developing a product that can replace 10% of it, they are worth $120 billion in expected value. (These numbers are obviously just for illustration).
The unprecedented numbers are a simple function of the unprecedented market size. Nobody has ever had a chance of creating trillions of dollars of economic value in a handful of years before.
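For what it's worth, the illustrative arithmetic above checks out (these are the commenter's made-up numbers, not real estimates):

    // Back-of-the-envelope using the illustrative numbers above.
    const usAnnualLabor = 12e12;   // ~$12T of US labor per year
    const shareReplaced = 0.10;    // product replaces 10% of it
    const chanceOfSuccess = 0.10;  // 10% shot the bet pays off

    const expectedValue = usAnnualLabor * shareReplaced * chanceOfSuccess;
    console.log(expectedValue);    // 1.2e11, i.e. ~$120B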
This doesn’t feel that new or surprising to me, although I suppose it depends what you consider the line between “science experiment” and “engineering R&D” to be.
Biotech has been a YC darling. Was Ginkgo Bioworks not doing science experiments?
Clean energy was a big YC fad roughly 15 years ago. Billions were invested towards scientific research into biofuels, solar, etc.
> This is the weirdest technology market that I’ve seen.
You must not have lived through the dot com boom. Almost everything under the sun was being sold on a website that started with an "e": ePets, ePlants, eStamps, eUnderwear, eStocks, eCards, eInvites...
Those things all worked, and all of those products still exist in one form or another. It was a business question of who would provide it, not a technology question.
It's funny that the Netherlands seems to still live in the dotcom boom to this day. Want to adopt a pet? verhuisdieren.nl. Want to buy wall art? wall-art.nl. Need cat5 cable? kabelshop.nl. 8/10 times there is a (legit) online store for whatever you need, to the point where one of the local e-commerce giants (Coolblue) buys this type of domain and aliases them to their main site.
Pretty funny, looks like it works in France too! animaux.fr redirects to a pet adoption service, cable.fr looks like a cable-selling shop. artmural.fr exists but looks like a personal blog from a wall artist, rather than a shop.
It did make sense though. ePlants could have cornered the online nursery market. That is a valuable market. I think people were just too early. Payment and logistics hadn’t been figured out yet.
Has someone done a survey asking devs how much they are getting done vs what their managers expect with AI? I've had conversations with multiple devs in big orgs telling me that managers' and devs' expectations are seriously out of sync. Basically it's:
Manager: Now you "have" AI, release 10 features instead of 1 in the next month.
Devs: Spending 50% more working hours to make AI code "work" and deliver 10.
If you think about Theranos, Magic Leap, OpenAI, Anthropic, they are all the same: one idea that's kinda plausible (well, if you don't look too closely), a slick demo, and well-connected founders.
Much as a lot of people dislike LeCun (just look at the Blind posts about him), he did set up and run a very successful team inside Meta, well, nominally at least.
Agree on weirdness but not on the idea of funding science experiments:
>> away from long-term research toward commercial AI products and large language models - LLMs
This feels more like what I see every day: the people in charge desperately looking for some way - any way - to capitalize on the frenzy. They're not looking to fund research; they just want to get even richer. It's pets.ai this time.
If a "science experiment" has the chance to displace most labor then whoever's successful at the experiment wins the economy, period. There's nothing weird or surprising about the logic of them obsessively chasing it. They all have to, it's a prisoner's dilemma.
Fusion power has the chance to displace most power generation, and whoever is successful at the experiment wins the energy economy, period. However, given the long timelines, high cost of research, and the unanswered technical questions around materials that can withstand neutron flux, the total 2024 investment into fusion is only around $10B, versus AI's $250B+.
I think there are two reasons. First, with AI, you get to see intermediate successes and, in theory, can derive profit from them. ChatGPT may not be profitable right now, but in the longer run, users will be paying whatever they have to pay for it because they are addicted to using it. So it makes sense to try and get as many users as you can into your ecosystem as early as possible, even if that means losses. With fusion, you won't see profitability for a very, very long time.
The second reason is by how much it's going to be better in the end. Fusion has to compete with hydro, nuclear, solar and wind. It makes exactly the same energy, so the upside is already capped unlike with AI which brings something disruptive.
People are unsophisticated and see how convincing LLM output looks on the surface. They think it's already intelligent, or that intelligence is just around the corner. Or that its ability to displace labor, if not intelligence, is imminent.
If consumption of slop turns out to be a novelty that goes away and enough time goes by without a leap to truly useful intelligence, the AI investment will go down.
I can’t help but wonder: if we had poured the same amount of money into fusion energy research and development, how far might we have come in just three short years?
If a science experiment that works and is transformational can be worth a trillion dollars, how much is it worth if it has a 5% chance of being transformational?
Because when the recipe is open and public, the product's success depends on Distribution (which has been cornered by MS, Google, Apple). This is good for the ecosystem but not sure how those particular VCs will get exits.
Very few startup products depend on distribution by Microsoft / Google / Apple. You're really just talking about a limited set of mobile or desktop apps there. Everything else is wide open. Kailera Therapeutics isn't going to live or die based on what the tech giants do.
Yes - I had similar thoughts when I saw the word "startup" used alongside something so far-out (the same "critique" should apply to Fei-Fei Li's World Labs - https://www.worldlabs.ai). These are VC-funded research labs (and there is nothing wrong with that). Calling them "startups", as if they are already working on an MVP on top of an unproven (and frankly non-existent) technology, seems a little disingenuous to me.
Yeah, that's quite unusual. Business was always terrible at being innovative, always dared to take only the safest and most minute of bets, and the progress of technology was always paid for by the taxpayers. Business usually stepped in only later, when the technology was ready, and did what it does best: optimize manufacturing and put it in the hands of as many consumers as possible, raking in billions.
I wonder what changed. Does AI look like a safe bet? Or does every other bet seem to not have any reasonable return?
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.
These are the types that want academic freedom in a cut-throat industry setup and conversely never fit into academia because their profiles and growth ambitions far exceed what an academic research lab can afford (barring some marquee names). It's an unfortunate paradox.
The Bell Labs we look back on was only the result of government intervention in the telecom monopoly. The 1956 consent decree forced Bell to license thousands of its patents, royalty free, to anyone who wanted to use them. Any patent not listed in the consent decree was to be licensed at "reasonable and nondiscriminatory rates."
The US government basically forced AT&T to use revenue from its monopoly to do fundamental research for the public good. Could the government do the same thing to our modern megacorps? Absolutely! Will it? I doubt it.
There used to be Google X. Not sure at what scale it was.
But if any state/central bank were clever, they would subsidize this.
That's a better trickle down strategy.
Until we get to agi and all new discoveries are autonomously led by AI that is :p
Appreciate you bringing up Bell Labs. So I decided to do a deep research[0] in Gemini[1] to understand why we don't have a Bell Labs like setup anymore.
Before I present my simple-minded takeaway below, I am happy to be schooled on how research labs in mega corporations really work and what their respective business models look like.
Seems like a research powerhouse like Bell Labs can thrive for a while only if the parent company (like a pre-1984 AT&T) is massively monopolistic and has an unbounded discretionary research budget.
One can say Alphabet is the only comparable company today where such an arrangement could survive, but I believe it would still pale in comparison to what the original Bell Labs used to be. I also think NEC Labs went in the same direction [2].
I became interested in the matter reading this thread and vaguely remember reading a couple of the articles. Saved them all in NotebookLM to get an audio overview and to read later. Thanks!
I always take a bird's eye kind of view on things like that, because however close I get, it always loops around to make no sense.
> is massively monopolistic and has an unbounded discretionary research budget
That is the case for most megacorps, if you look at all the financial instruments.
Modern monopolies are not equal to single-corporation domination. Modern monopolies are portfolios that do business using the same methods and strategies.
The problem is that private interests strive mostly for control, not money or progress. If they have to spend a lot of money to stay in control of (their (share of the)) segments, they will do that, which is why stuff like the current graph of investments of, by and for AI companies and the industries works.
A modern equivalent and "breadth" of a Bell Labs (et al.) kind of R&D speed could not be controlled and would 100% result in actual artificial intelligence vs all those white-label AI toys we get now.
Post-WWI and WWII "business psychology" has built a culture that cannot thrive in a free world (free as in undisturbed and left to all devices available) for a variety of reasons, but mostly because of elements with a medieval/dark-age kind of aggressive tendency to come to power and maintain it that way.
In other words: not having a Bell Labs kind of setup anymore ensures that the variety of approaches taken at large scales, aka industry-wide or systemic, remains narrow enough.
Google DeepMind is the closest lab to that idea because Google is the only entity that is big enough to get close to the scale of AT&T. I was skeptical that the DeepMind and Google Brain merger would be successful, but it seems to have worked surprisingly well. They are killing it with LLMs and image editing models. They are also backing the fastest growing cloud business in the world and collecting Nobel prizes along the way.
It seems DeepMind is the closest thing to a well funded blue-sky AI research group, even despite the merger with Google Brain and now more of a product focus.
Man, why did no one tell the people who invented bronze that they weren’t allowed to do it until they had a correct definition for metals and understood how they worked? I guess the person saying something can’t be done should stay out of the way of the people doing it.
I'm not sure what 'inventing bronze' is supposed to be. 'Inventing' AGI is pretty much equivalent to creating new life, from scratch. And we don't have an idea on how to do that either, or how life came to be.
>> I guess the person saying something can’t be done should stay out of the way of the people doing it.
I'll happily step out of the way once someone simply tells me what it is you're trying to accomplish. Until you can actually define it, you can't do "it".
The big tech companies are trying to make machines that replace all human labor. They call it artificial intelligence. Feel free to argue about definitions.
Intelligence and human health can't be defined neatly. They are what we call suitcase words. If there exists a physiological tradeoff in medical research between living to 500 years and being able to lift 1000 kg in one's youth, those are different dimensions/directions across which we can make progress. The same happens for intelligence. I think we are on the right track.
I don't think the bar exam is scientifically designed to measure intelligence, so that was an odd example. Citing the bar exam is like saying it passes a "Game of Thrones trivia" exam, so it must be intelligent.
As for IQ tests and the like, to the extent they are "scientific", they are designed based on empirical observations of humans. They are not designed to measure the intelligence of a statistical system containing a compressed version of the internet.
Or does this just prove lawyers are artificially intelligent?
yes, a glib response, but think about it: we define an intelligence test for humans, which by definition is an artificial construct. If we then get a computer to do well on the test we haven't proved it's on par with human intelligence, just that both meet some of the markers that the test makers are using as rough proxies for human intelligence. Maybe this helps signal or judge if AI is a useful tool for specific problems, but it doesn't mean AGI
Hi there! :) Just wanted to gently flag that one of the terms (beginning with the letter "r") in your comment isn't really aligned with the kind of inclusive language we try to encourage across the community. Totally understand it was likely unintentional - happens to all of us! Going forward, it'd be great to keep things phrased in a way that ensures everyone feels welcome and respected. Thanks so much for taking the time to share your thoughts here!
> A pipe dream sustaining the biggest stock market bubble in history
This is why we're losing innovation.
Look at electric cars, batteries, solar panels, rare earths and many more. Bubble or struggle for survival? Right, because if the US has no AI, the world will have no AI? That's the real bubble - being stuck in an ancient world view.
Meta's stock has already tanked for "over"-investing in AI. Bubble, where?
> 2 Trillion dollars in Capex to get code generators with hallucinations
You assume that's the only use of it.
And are people not using these code generators?
Is this an issue with a lost generation that forgot what Capex is? We've moved from Capex to Opex and now the notion is lost, is it? You can hire an army of software developers but can't build hardware.
Is it better when everyone buys DeepSeek or a non-US version? Well then you don't need to spend Capex but you won't have revenue either.
And that $2T you're referring to includes infrastructure like energy, data centers, servers and many things. DeepSeek rents from others. Someone is paying.
More importantly, even if you do want it, and there are business situations that support your ambitions, you still have to get into the managerial power play, which quite honestly takes a separate kind of skill set, time and effort - which I'm guessing the academia-oriented people aren't willing to invest.
It's pretty much dog eat dog at top management positions.
It's not exactly a space for free thinking timelines.
It is not a free-thinking paradise in academia either. Different groups fighting for hiring, promotions and influence exist there, too. And it tends to be more pronounced: it is much easier in industry to find a comparable job to escape a toxic environment, so a lot of problems in academic settings stew forever.
But the skill sets to avoid and survive personnel issues in academia are different from industry's. My 2c.
> It's not exactly a space for free thinking timelines.
Same goes for academia. People's visions compete for other people's financial budgets, time and other resources. Some dogs get to eat, study, train at the frontier and with top tools in top environments while the others hope to find a good enough shelter.
It's very hard (and almost irreconcilable) to lead both Applied Research -- that optimizes for product/business outcomes -- and Fundamental Research -- that optimizes for novel ideas -- especially at the scale of Meta.
LeCun had chosen to focus on the latter. He can't be blamed for not having taken on the second hat.
I would pose the question differently: under his leadership, did Meta achieve a good outcome?
If the answer is yes, then better to keep him, because he has already proved himself and you can win in the long-term. With Meta's pockets, you can always create a new department specifically for short-term projects.
If the answer is no, then nothing to discuss here.
Meta did exactly that, kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.
If you follow LeCun on social media, you can see that the way FAIR’s results are assessed is very narrow-minded and still follows the academic mindset. He mentioned that his research is evaluated by: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."
But as an industry researcher, he should know how his research fits with the company vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now, he seems to have failed that objective, even though he has been at Meta for more than 10 years.
Also he always sounds like "I know this will not work". Dude are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.
If academia is in question, then so are their titles.
When I see "PhD", I read "we decided that he was at least good enough for the cause" PhD, or PhD (he fulfilled the criteria).
He's speaking to the entire feedforward Transformer-based paradigm. He sees little point in continuing to try to squeeze more blood out of that stone and instead move on to more appropriate ways to model ontologies per se rather than the crude-for-what-we-use-them-for embedding-based methods that are popular today.
I really resonate with his view due to my background in physics and information theory. I for one welcome his new experimentation in other realms while so many still hack away at their LLMs in pursuit of SOTA benchmarks.
If the LLM hype doesn't cool down fast, we're probably looking at another AI winter. Appears to me like he's just trying to ensure he'll have funding for chasing the global maximum going forward.
> If the LLM hype doesn't cool down fast, we're probably looking at another AI winter.
Is the real bubble ignorance? Maybe you'll cool down but the rest of the world? There will just be more DeepSeek and more advances until the US loses its standing.
This is the right take. He is obviously a pioneer and much more knowledgeable than Wang in the field, but if you no longer have the product mind to serve the company's business interests in both the short term and the long term, you may as well stay in academia and be your own research director, rather than a chief executive at one of the largest public companies.
Meta had a two prong AI approach - product-focused group working on LLMs, and blue-sky research (FAIR) working on alternate approaches, such as LeCun's JEPA.
It seems they've given up on the research and are now doubling down on LLMs.
None of Meta's revenue has anything to do with AI at all. (Other than GenAI slop in old people's feeds.) Meta is in the strange position of investing very heavily in multiple fields where they have no successful product: VR, hardware devices, and now AI. Ad revenue funds it all.
LeCun truly believes the future is in world models. He’s not alone. Good for him to now be in the position he’s always wanted and hopefully prove out what he constantly talks about.
He seems stuck in the GOFAI development philosophy where they just decide humans have something called a "world model" because they said so, and then decide that if they just develop some random thing and call it a "world model" it'll create intelligence because it has the same name as the thing they made up.
And of course it doesn't work. Humans don't have world models. There's no such thing as a world model!
LLM hostility was warranted. The overhyped/downright charlatan nature of AI hype and marketing threatens another AI winter. It happened to cybernetics; it'll happen to us too. The finance folks will be fine, they'll move to the next big thing to overhype; it is the researchers who suffer the fall-out. I am considered anti-LLM (anti-transformers, anyway) for this reason. I like the architecture, it is cool and rather capable at its problem set, which is a unique set, but it isn't going to deliver any of what has been promised, any more than a plain DNN or a CNN will.
Yann was in charge of FAIR, which has nothing to do with Llama 4 or the product-focused AI orgs. In general your comment is filled with misrepresentations. Sad.
I totally agree. He appeared to act against his employer and actively undermined Meta's effort to attract talent by his behavior visible on X.
And I stopped reading him, since he - in my opinion - trashed on autopilot everything the 99% did - and these 99% were already beyond two standard deviations of greatness.
It is even more problematic if you have absolutely no results, e.g. products, to back your claims.
Tbf, transformers from more of a developmental perspective are hugely wasteful. They're long-range stable, sure, but the whole training process requires so much power/data compared to even slightly simpler model designs that I can see why people are drawn to alternative complex model designs down-playing the reliance on pure attention.
Yeah I think LeCun is underestimating the impact that LLM's and Diffusion models are going to have, even considering the huge impact they're already having. That's no problem as I'm sure whatever LeCun is working on is going to be amazing as well, but an enterprise like Facebook can't have their top researcher work on risky things when there's surefire paths to success still available.
I politely disagree - it is exactly an industry researcher's purpose to do the risky things that may not work, simply because the rest of the corporation cannot take such risks but must walk on more well-trodden paths.
Corporate R&D teams are there to absorb risk, innovate, disrupt, create new fields, not for doing small incremental improvements. "If we know it works, it's not research." (Albert Einstein)
I also agree with LeCun that LLMs in their current form are a dead end. Note that this does not mean that I think we have already exploited LLMs to the limit; we are still at the beginning. We also need to create an ecosystem in which they can operate well: for instance, to combine LLMs with Web agents better we need a scalable "C2B2C" (customer delegated to business to business) micropayment infrastructure, because these systems have already begun talking to each other, and in the longer run nobody will offer their APIs for free.
I work on spatial/geographic models, inter alia, which by coincidence is one of the directions mentioned in the LeCun article. I do not know what his reasoning is, but mine was/is: LMs are language models, and should (only) be used as such. We need other models - in particular a knowledge model (KM/KB) to cleanly separate knowledge from text generation - it looks to me right now that only that will solve hallucination.
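To make the "separate knowledge from text generation" idea concrete, here is a tiny hypothetical sketch (all names and the KB format are made up; a real KM/KB would be far richer) where the LM is only asked to verbalize facts retrieved from an external store, and refuses when nothing is retrieved:

    // Hypothetical sketch: the LM only verbalizes retrieved facts.
    type Fact = { subject: string; predicate: string; object: string };

    const knowledgeBase: Fact[] = [
      { subject: "Rhine", predicate: "flowsThrough", object: "Basel" },
    ];

    // Stand-in for a real KB query (SPARQL, graph lookup, vector search, ...).
    function retrieve(question: string): Fact[] {
      return knowledgeBase.filter((f) => question.includes(f.subject));
    }

    async function answer(
      question: string,
      llm: (prompt: string) => Promise<string>
    ): Promise<string> {
      const facts = retrieve(question);
      if (facts.length === 0) return "I don't know."; // refuse rather than hallucinate
      // The LLM acts purely as a language model: turn the facts into prose.
      return llm(
        `Answer using ONLY these facts: ${JSON.stringify(facts)}\nQ: ${question}`
      );
    }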
Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.
Everything from the sorites paradox to leaky abstractions; everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.
You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.
These things are best used as tools by something similar to LLMs, models to be used, built and discarded as needed, but never a ground source of truth.
>Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.
I don't disagree that the world is full of fuzziness. But the problem I have with this portrayal is that formal models are often normative rather than analytical. They create reality rather than being an interpretation or abstraction of reality.
People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions. And this is not just true for software products. It's also largely true for manufactured products. Our world is very much shaped by artifacts and man-made rules.
Our probabilistic, fuzzy concepts are often simply a misconception. That doesn't mean it's not important of course. It is important for an AI to understand how people talk about things even if their idea of how these things work is flawed.
And then there is the sort of semi-formal language used in legal or scientific contexts that often has to be translated into formal models before it can become effective. Law makers almost never write algorithms (when they do, they are often buggy). But tax authorities and accounting software vendors do have to formally model the language in the law and then potentially change those formal definitions after court decisions.
My point is that the way in which the modeled, formal world interacts with probabilistic, fuzzy language and human actions is complex. In my opinion we will always need both. AIs ultimately need to understand both and be able to combine them just like (competent) humans do. AI "tool use" is a stop-gap. It's not a sufficient level of understanding.
> People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions.
> Our probabilistic, fuzzy concepts are often simply a misconception.
How eg a credit card works today is defined by financial institutions. How it might work tomorrow is defined by politics, incentives, and human action. It's not clear how to model those with formal language.
I think most systems we interact with are fuzzy because they are in a continual state of change due to the aforementioned human society factors.
To some degree I think that our widely used formal languages may just be insufficient and could be improved to better describe change.
But ultimately I agree with you that this entire societal process is just categorically different. It's simply not a description or definition of something, and therefore the question of how formal it can be doesn't really make sense.
Formalisms are tools for a specific but limited purpose. I think we need those tools. Trying to replace them with something fuzzy makes no sense to me either.
Is it that fuzzy, though? If it were, would language adequately grasp and model our realities? And what about the physical world itself: animals are modeling the world adequately enough to navigate it. There are significant gains to be made from modeling _enough_ of the world, without falling into hallucinations from the purely statistical associations of an LLM.
World models are trivial. E.g., narratives are world models, and they provide only prefrontal simulation, i.e. they are synthetic prey-predation.
No animal uses world models for survival, and it's doubtful they exist (maps are not models); a world model doesn't conform to optic flow, i.e. instantaneous use and response. Anything like a world model isn't shallow, the basic premise of oscillatory command; it's needlessly deep, nothing like brains. This is just a frontier hail-mary of the current age.
You're basically describing the knowledge problem vs model structure, how to even begin to design a system which self-updates/dynamically-learns vs being trained and deployed.
Cracking that is a huge step, pure multi-modal trained models will probably give us a hint, but I think we're some ways from seeing a pure multi-modal open model which can be pulled apart/modified. Even then they're still train and deploy not dynamically learning.
I worry we're just going to see LSTM design bolted onto deep LLMs because we don't know where else to go, and it will be fragile and take eons to train.
And the less said about the crap of "but inference is doing some kind of minimization within the context window" the better; it's vacuous and not where great minds should be looking for a step forwards.
I have vague notions of there being an entire hidden philosophical/political battlefield (massacre?) behind the whole "are knowledge models/ontologies a realistic goal" debate.
Starting with the sophomoric questions of the optimist who mistakes the possible for the viable: how definite of a thing is "the world", how knowable is it, what is even knowledge... and then back through the more pragmatic: by whom is it knowable, to what degree, and by what means. The mystics: is "the world" the same thing as "the sum of information about the world"? The spooks: how does one study those fields of information which are already agentic and actively resist being studied by changing themselves, such as easily emerge anywhere more than n(D) people gather?
Plenty of food for thought from why ontologies are/aren't a thing. The classical example of how this plays out in the market being search engines winning over internet directories. But that's one turn of the wheel. Look at what search engines grew into quarter century later. What their outgrowths are doing to people's attitude towards knowledge. Different timescale, different picture.
Fundamentally, I don't think human language has sufficient resolution to model large spans of reality within the limited human attention span. The physical limits of human language as information processing device have been hit at some point in the XX century. Probably that 1970s divergence between productivity and wages.
So while LLMs are "computers speak language now" and it's amazing if sad that they cracked it by more data and not by more model, what's more amazing is how many people are continually ready to mistake language for thought. Are they all P-zombies or just obedience-conditioned into emulating ones?!?!?
Practically, what we lack is not the right architecture for "big knowing machine", but better tools for ad-hoc conceptual modeling of local situations. And, just like poetry that rhymes, this is exactly what nobody has a smidgen of interest to serve to consumers, thus someone will just build it in their basement in the hope of turning the tables on everyone. Probably with the help of LLMs as search engines and code generators. Yall better hurry. They're almost done.
Nice commentary and I enjoyed the poetic turn of phrase. I had to respond to it with my own thoughts if only to bookmark it for myself.
> how many people are continually ready to mistake language for thought
This is a fundamental illusion - where, rote memory and names and words get mistaken for understanding. This was wonderfully illustrated here [1]. Few really grok what understanding actually is. This is an unfortunate by-product of the education system we have.
> Are they all P-zombies or just obedience-conditioned into emulating ones?!?!?
Brilliant way to state the fundamental human condition. ie, we are all zombies obedience conditioned to imitate rather than understand. Social media amplifies the zombification, and now LLMs too.
> Starting with the sophomoric questions of the optimist who mistakes the possible for the viable
This is the fundamental tension between operationalized meaning and imagination wherein a grokking soul gathers mists from the cosmic chaos and creates meaning and then continually adapts it.
> it's amazing if sad that they cracked it by more data and not by more model
I was speaking to experts in the sciences (chemistry) and they were shocked that the underlying architecture is brute force; they expected some compact theory, with information compressed not by brute force but by theorization.
> The physical limits of human language as information processing device have been hit at some point in the XX century
2000 years back when humans realized that formalism was needed to operationalize meaning, and natural language was too vague to capture and communicate it. Only because the world model that natural language captures encompasses "everything" whereas for making it "useful" requires to limit the world model via formalism.
> it is exactly a researcher's purpose to do the risky things that may not work
Maybe at university, but not at a trillion dollar company. That job as chief scientist is leading risky things that will work to please the shareholders.
They knew what Yann LeCun was when they hired him. If anything, those brilliant academics who have done what they're told and loyally pursued corporate objectives the way the corporation wanted (e.g. Karpathy when he was at Tesla) haven't had great success either.
>They knew what Yann LeCun was when they hired him.
Yes, but he was hired in the ZIRP era, when all SV companies were hiring every opinionated academic and giving them free rein and unlimited money to burn in the hopes that maybe they'll create the next big thing for them eventually.
These are very different economic times right now, after the Fed infinite money glitch has been patched out, so people do need to adjust and start actually making some products of value for their seven-figure costs to their employers, or end up being shown the door.
LLMs and Diffusion solve a completely different problem than world models.
If you want to predict future text, you use an LLM. If you want to predict future frames in a video, you go with Diffusion. But what both of them lack is object permanence. If a car isn't visible in the input frame, it won't be visible in the output. But in the real world, there are A LOT of things that are invisible (image) or not mentioned but only implied (text) that still strongly affect the future. Every kid knows that when you roll a marble behind your hand, it'll come out on the other side. But LLMs and Diffusion models routinely fail to predict that, as for them the object disappears when it stops being visible.
Based on what I heard from others, world models are considered the missing ingredient for useful robots and self-driving cars. If that's halfway accurate, it would make sense to pour A LOT of money into world models, because they will unlock high-value products.
Sure, if you only consider the model they have no object permanence. However you can just put your model in a loop, and feed the previous frame into the next frame. This is what LLM agent engineers do with their context histories, and it's probably also what the diffusion engineers do with their video models.
Messing with the logic in the loop and combining models has an enormous potential, but it's more engineering than researching, and it's just not the sort of work that LeCun is interested in. I think the conflict lies there, that Facebook is an engineering company, and a possible future of AI lies in AI engineering rather than AI research.
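A rough sketch of the "model in a loop" pattern described above (predictNext is a stand-in for whatever model call you use; this is the engineering glue, not the model itself):

    // The model has no object permanence on its own; permanence is carried by
    // feeding each step's output back in as part of the next step's input.
    type Frame = string; // placeholder for a frame, token chunk, or agent state

    async function rollOut(
      predictNext: (history: Frame[]) => Promise<Frame>,
      initial: Frame,
      steps: number
    ): Promise<Frame[]> {
      const history: Frame[] = [initial];
      for (let i = 0; i < steps; i++) {
        const next = await predictNext(history); // model sees the full history
        history.push(next);                      // output becomes future input
      }
      return history;
    }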
This is something that was true last year, but it's hanging on by a thread this year. Genie shows this off really well, but it's in the video models as well.[1]
I think world models are the way to go for superintelligence. One of the patents I saw already going in this direction for autonomous mobility is https://patents.google.com/patent/EP4379577A1, where synthetic data generation (visualization) is the missing step in terms of our human intelligence.
This is the first time I have heard of world models. Based on my brief reading, it does look like this is the ideal model for autonomous driving. I wonder if the self-driving companies are already using this architecture or something close to it.
I thoroughly disagree; I believe world models will be critical in some aspect for text generation too. A predictive world model can help to validate your token prediction. Take a look at the Code World Model, for example.
> I think LeCun is underestimating the impact that LLM's and Diffusion models
No, I think he's suggesting that "world models" are more impactful. The issue for him inside Meta is that there is already a research group looking at that, and it is wildly more successful (in terms of getting research to product) and way fucking cheaper to run than FAIR.
Also, LeCun is stuck weirdly in product land rather than research (RL-R), which means he's not got the protection of Abrash to isolate him from the industrial stupidity that is the product council.
> Facebook can't have their top researcher work on risky things when there's surefire paths to success still available.
How did you determine that "surefire paths to success still available"? Most academics agree that LLMs (or LLMs alone) are not going to lead us to AGI. How are you so certain?
I don't believe we need more academic research to achieve AGI. The sort of applications that are solving the recent AGI challenges are just severely resource constrained AGI. The only difference between those systems and human intelligence are resources and incentives.
Not that I believe AGI is the measure of success, there's probably much more efficient ways to achieve company goals than simulating humans.
Not sure I agree. AI seems to be following the same 3-stage path of many inventions: innovation > adoption > diffusion. LeCun and co focus on the first, and LLMs in their current form appear to be incremental improvements; we're still using the same basis from more than ten years ago. FB and industry are signalling a focus on harvesting the innovation, and that could last - but also take - many years or decades. Your fundamental researchers are not interested in (or the right people for) that position.
In the software development world, yes; outside of that, virtually none. Yes, you can transcribe a video call in Office, but that's not ground breaking. I dare you to list 10 impacts on different fields, excluding tech and including at least half blue-collar fields and at least half white-collar fields, at different levels from the lowest to the highest in the company hierarchy, that LLM/Diffusion models are having. Impact here specifically means a significant reduction of costs or a significant increase of revenue. Go on.
I'm also not sure it even drives a ton of value in software engineering. It makes the easy part easier and the hard part harder. Typing out software in your mind was never the difficult part. Figuring out what to write, how to interpret specs in context, how to make your code work within the context of a broader whole, how to be extensible, maintainable, reliable, etc. That's hard, and LLMs really don't help.
Even when writing, it shifts the mental burden from an easy thing (writing code) to a very hard thing (reading that code, validating it's right, hallucination free, and then refactoring it to match your teams code style and patterns).
It's great for building a first-order approximation of a tech demo app that you then throw out and build from scratch, and auto-complete. In my experience, anyways. I'm sure others have had different experiences.
You already mentioned two fields they have a huge impact on: software development and NLP (the latter the most impacted so far). Another field that comes to mind is academic research, which is getting an important boost as well, via semantic search or more advanced stuff like Google's biological cell model, which has already uncovered new treatments. I'm sure I'm missing a lot of other fields I'm less familiar with (legal, for example). But just these impacts I listed are all huge and they will indirectly have a huge impact on all other areas of human industry; it's just a matter of time. "Software will eat the world" and all that.
Personally, I find myself using LLMs more than Google now, even for non-development tasks. I think this shift is going to become the new normal (if it isn't already).
And what's the end result? All one can see is just a bigger share of people who confidently subscribe to false information and become arrogant when its validity is questioned, as the LLM writing style has convinced them it's some sort of authority. Even people on this website are misinformed enough to believe that ChatGPT has developed its own reasoning, despite it being, at the core, an advanced learning algorithm trained on an enormous amount of human-generated data.
And let's not speak of those so deep into sloth that they put it to use to degrade, not augment as they claim, humane creative and recreational activities.
While I agree with your point, “Superintelligence” is a far cry from what Meta will end up delivering with Wang in charge. I suppose that, at the end of the day, it’s all marketing. What else should we expect from an ads company :?
The last time LeCun disagreed with the AI mainstream was when he kept working on neural nets when everyone thought they were a dead end. He might be entirely right in his LLM scepticism. It's hardly a surefire path. He didn't prevent Meta from working on LLMs anyway.
The issue is more that his position is not compatible with short-term investor expectations, and that's fatal in a company like Meta at the position LeCun occupies.
Do you? Or is it possible to acknowledge a plateau in innovation without necessarily having an immediate solution cooked-up and ready to go?
Are all critiques of the obvious decline in physical durability of American-made products invalid unless they figure out a solution to the problem? Or may critics of a subject exist without necessarily being accredited engineers themselves?
LLMs are probably always going to be the fundamental interface; the problem they solved was related to the flexibility of human languages, allowing us to have decent mimicries.
And while we've been able to approximate the world behind the words, it's just full of hallucinations because the AI's lack axiomatic systems beyond much manually constructed machinery.
You can probably expand the capabilities by attaching things at the front-end, but I suspect that Yann is seeing limits to this and wants to go back and build up from the back-end of world reasoning, and then _among other things_ attach LLMs at the front-end (but maybe on equal terms with vision models, allowing for seamless integration of LLM interfacing _combined_ with vision for proper autonomous systems).
> because the AI's lack axiomatic systems beyond much manually constructed machinery.
Oh god, that is massively under-selling their learning ability. These models are able to extract and reply with why jokes are funny without even knowing basic vocab, yet there are pure-code models out there with lingual rules baked in from day one which still struggle with basic grammar.
The _point_ of LLMs arguably is their ability to learn any pattern thrown at them with enough compute.
With an exception for learning how logical processes work, and pure LLMs only see "time" in the sense that a paragraph begins and ends.
At the least they have taught computers, "how to language", which in regards to how to interact with a machine is a _huge_ step forward.
Unfortunately the financial incentives are split between agentic model usage (taking the idea of a computerised butler further), maximizing model memory and raw learning capacity (answering all problems at any time), and long-range consistency (longer ranges give better, more stable results for a few reasons, but we're some way from seeing an LLM with 128k experts and 10e18 active tokens).
I think in terms of building the perfect monkey butler we already have most or all of the parts. With regard to a model which can dynamically learn on the fly... LLMs are not the end of the story, and we need something to allow the models to more closely tie their LS with the context. Frankly, the fact that DeepSeek gave us an LLM with LS was a huge leap, since previous model attempts had been overly complex and had failed in training.
I agree. I never understood LeCun's statement that we need to pivot toward the visual aspects of things because the bitrate of text is low while visual input through the eye is high.
Text and languages contain structured information and encode a lot of real-world complexity (or it's "modelling" that).
Not saying we won't pivot to visual data or world simulations, but he was clearly not the type of person to compete with other LLM research labs, nor did he propose any alternative that could be used to create something interesting for end-users.
The issue is context. Trying to make an AI assistant with just text-only inputs is doable but limiting. You need to know the _context_ of all the data, and without visual input most of it is missing.
For example, "Where is the other half of this?" is almost impossible to solve unless you have an idea of what "this" is.
But to do that you need to have cameras, and to use cameras you need to have position, object, and people tracking. And that is a hard problem that's not solved.
The hypothesis is that "world models" solve that with an implicit understanding of the world and the objects in context.
Text and language contain only approximate information filtered through human eyes and brains. Also, animals don't have language and can show quite advanced capabilities compared to what we can currently do in robotics. And if you do enough mindfulness you can dissociate cognition/consciousness from language. I think we are lured in because of how important language is for us humans, but intuitively it's obvious to me that language (and LLMs) are only a subcomponent, or even irrelevant for, say, self-driving or robotics.
If LeCun's research had made Meta a powerhouse of video generation or general purpose robotics - the two promising directions that benefit from working with visual I/O and world modeling as LeCun sees it - it could have been a justified detour.
"LLMs get results" is quite the bold statement.
If they get results, they should be getting adopted, and they should be making money. This is all built on hazy promises.
If you had marketable results, you wouldn't have to hide 20+ billion dollars of debt financing into an obscure SPV.
LLMs are the most baffling piece of tech. They are incredible, and yet marred by their non-deterministic hallucinatory nature, and bound to fail in adoption unless you convince everyone that they don't need precision and accuracy, but they can do their business at 75% quality, just with less human overhead.
It's quite the thing to convince people of, and that's why it needs the spend it's needing. A lot of we-need-to-stay-in-the-loop CEOs and bigwigs got infatuated with the idea, and most probably they just had their companies get addicted to the tech equivalent of crack cocaine.
A reckoning is coming.
LLMs get results, yes. They are getting adopted, and they are making money.
Frontier models are all profitable. Inference is sold with a damn good margin, and the amount of inference AI companies sell keeps rising. This necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, and this necessitates even more spending.
A mistake I see people make over and over again is keeping track of the spending but overlooking the revenue altogether. Which sure is weird: you don't get from $0B in revenue to $12B in revenue in a few years by not having a product anyone wants to buy.
And I find all the talk of "non-deterministic hallucinatory nature" to be overrated. Because humans suffer from all of that too, just less severely. On top of a number of other issues current AIs don't suffer from.
Nonetheless, we use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.
Dario Amodei from Anthropic has made the claim that if you looked at each model as a separate business, it would be profitable [1], i.e. each model brings in more revenue than the total of training + inference costs. It's only because you're simultaneously training the next generation of models, which are larger and more expensive to train, but aren't generating revenue yet, that the company as a whole loses money in a given year.
Now, it's not like he opened up Anthropic's books for an audit, so you don't necessarily have to trust him. But since he's the CEO of Anthropic, you do need to believe that either (a) what he is saying is roughly true or (b) he is making the sort of fraudulent statements that would probably get you sent to prison.
He's speaking in a purely hypothetical sense. The title of the video even makes sure to note "in this example". If it turned out this wasn't true of Anthropic, it certainly wouldn't be fraud.
You don't even need insider info - it lines up with external estimates.
We have estimates that range from 30% to 70% gross margin on API LLM inference prices at major labs, with 50% as the middle road, and 10% to 80% gross margin on user-facing subscription services, with massively inflated error bars. We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of 10x or more over the lifetime of a model.
The only source of uncertainty is: how much inference do the free tier users consume? Which is something that the AI companies themselves control: they are in charge of which models they make available to the free users, and what the exact usage caps for free users are.
Adding that up? Frontier models are profitable.
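For what it's worth, here's the back-of-envelope version of that adding-up, in code. Every number below is an assumption pulled from the rough ranges above, not a disclosed figure, so treat it as a sketch of the argument rather than anyone's actual books:

```python
# Back-of-envelope unit economics for a hypothetical frontier model.
# All inputs are assumptions drawn from the rough ranges discussed above.
training_compute_cost = 1.0    # normalize the training run cost to 1
inference_compute_cost = 10.0  # lifetime inference compute ~10x training (reported)
free_tier_share = 0.3          # assume 30% of inference serves free users
gross_margin = 0.5             # assume ~50% gross margin on paid inference

paid_inference_cost = inference_compute_cost * (1 - free_tier_share)
paid_revenue = paid_inference_cost / (1 - gross_margin)  # price implied by the margin
total_cost = training_compute_cost + inference_compute_cost

print(f"revenue {paid_revenue:.1f} vs cost {total_cost:.1f} "
      f"-> profit {paid_revenue - total_cost:+.1f} per unit of training spend")
# Under these assumptions the model clears its training cost, though the conclusion
# is sensitive to the margin and free-tier share at the low end of the quoted ranges.
```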
This goes against the popular opinion, which is where the disbelief is coming from.
Note that I'm talking LLMs rather than things like image or video generation models, which may have vastly different economics.
> We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of x10 or more over the lifetime of a model.
In this comment you proceeded to basically reinvent the meaning of "profitable company", but sure.
I won't even get into the point of comparing LLMs to humans, because I choose not to engage with whoever doesn't have the human decency, humanistic compass, or basic philosophical understanding to see that putting LLMs and human labor on the same level to justify hallucinations and non-determinism is deranged and morally bankrupt.
OpenAI and Anthropic are making north of $4B/year in revenue, so some companies have figured out the money-making part. ChatGPT has some 800M users according to some estimates. Whether it's enough money today, or enough money tomorrow, is of course a question, but there is a lot of money. Users would not use them at this scale if they did not solve their problems.
People used to say this about Amazon all the time. Remember how Amazon basically didn’t turn any real profits for 2 decades? The joke was that Amazon was a charitable organisation being funded by Wall Street for the benefit of human kind.
That didn’t last. People in the know knew that once you have a billion users and insane revenue and market power and have basically bought or driven out of business most of your competitors (Diapers.com, Jet.com, etc) you can eventually slow down your physical expansion, tighten the screws on your suppliers, increase efficiencies, and start printing money.
The VCs who are funding these companies are hoping that they have found the next Amazon. Many will probably go out of business, but some might join the ranks of trillion dollar companies.
If you hire a house cleaner to clean your house, and the cleaner doesn't do well, would you eject yourself out of the house? You would not. You would switch to a new cleaner.
But if we hire someone to do R&D on fully automating the house-cleaning process, we might not necessarily expect the house to be maintained in a clean state by the researchers themselves any time we enter the room.
I think he means Zuckerberg himself; the metaverse isn't exactly a major success. But this is a false equivalence: the way he organized the company, only his vote matters, so he does what he wants.
> But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
When did they make groundbreaking foundation models though? DeepMind and OpenAI have done plenty of revolutionary things, what did Meta AI do while being led by LeCun?
You joke, but the Star Wars games - especially the pinball one, for me at least - are some of the best experiences available on Quest headsets. I've been playing software pinball (as well as the real thing) since the 80s, and this is one of my favorite ways to do it now, which I will keep coming back to.
I suppose they could solve superintelligence and cure cancer and build fusion reactors with it, but that's 100% outside their comfort zone - if they manage to build synthetic conversation partners and synthetic content generators as good as or better than the real thing, the value of having every other human on the planet registered to one of their social networks goes to zero.
Which is impossible anyway - I use Facebook to maintain real human connections and keep up with people who I care about, not to consume infinite content.
At 1.6T market cap it's very hard to 10x or greater the company anymore doing what's in their comfort zone and they've got a lot of money to play with to find easier to grow opportunities. If Zuckerberg was convinced he could do that by selling toothpicks they'd have a go at the toothpick business. They went after the "metaverse" first, then AI. Both are just very fast growth options which happen to be tech focused because that's the only way you generate new comparable value as a company (unless you're sitting on a lot of state owned oil) in the current markets.
they are out for your clicks and attention minutes
If OpenAI can build a "social" network of completely generated content, that could kill Meta. Even today I venture to guess that most of the engagement on their platforms is not driven by real friends, so an AI-driven platform won't be too different, or it might make content generation so easy that your friends engage again.
Apart from that, the ludicrous vision of the metaverse seems much more plausible with highly realistic world models.
How do LLMs help with clicks and attention minutes? Why do they spend $100B+ a year in AI capex, more than Google and Microsoft, which actually rent AI compute to clients? What are they going to do with all that compute? It's all so confusing.
Browse TikTok and you already see AI generated videos popping up. Could well be that the platforms with the most captivating content will not be a "social" network but one consisting of some tailor made feed for you. That could undermine the business model of the existing social networks - unless they just fill it with AI generated content themselves. In other words: Facebook should really invest in good video generating models to keep their platforms ahead.
It might be just me, but in my opinion Facebook's platforms are way past the "content from your friends" phase and are full of cheaply peddled viral content.
If that content becomes even cheaper, of higher quality, and highly tailored to you, that is probably worth a lot of money, or at least worth not losing your entire company to a new competitor.
But practically speaking, is Meta going to be generating text or video content itself? Are they going to offer some kind of creator tools so you can use it to create video as a user and they need the compute for that? Do they even have a video generation model?
The future is here folks, join us as we build this giant slop machine in order to sell new socks to boomers.
For all of your questions Meta would need a huge research/GPU investment, so that still holds.
In any case, if I had to guess, we will see shallow things like the Sora app (a video-generation TikTok-style social network) and deeper integrations like fake influencers and content generation that fits both your preferences and ad publishers' preferences.
A more evil incarnation of this might be a social network where you aren't sure who is real and who isn't. This will probably be a natural evolution of the need to bootstrap a social network with people, then replacing them with LLMs.
Zuck did this on purpose, humiliating LeCun so he would leave.
Despite LeCun being proved wrong on LLMs' capabilities such as reasoning, he remained extremely negative, which was not exactly inspiring leadership for the Meta AI team; he had to go.
But LLMs still can't reason... in any reasonable sense. No matter how you look at it, it is still a statistical model that guesses the next word; it doesn't think or reason per se.
No, it was because LeCun had no talent for running real life teams and was stuck in a weird place where he hated LLMs. He frankly was wasting Meta’s resources. And making him report to Wang was a way to force him out.
Zuckerberg knows what he wants but he rarely knows how to get it. That's been his problem all along. Unlike others he isn't scared to throw ridiculous amounts of money at a problem though and buy companies who do things he can't get done himself.
There's also the aspect of control - because of how the shares and ownership are organized, he answers essentially to no one. In any other company, burning this much cash, as with VR and now AI, without any sensible results would have gotten him ejected a long time ago.
It wasn’t boneheaded. It was done to make Yann leave. Meta doesn’t want Yann for good reason.
Yann was largely wrong about AI. Yann coined the term stochastic parrot and derided LLMs as a dead end. It's now utterly clear how much utility LLMs have, and that whatever these LLMs are doing is much more than stochastic parroting.
I wouldn't give money to Yann; the guy is a stubborn idiot and closed-minded. Whatever he's doing won't even touch LLM technology. He was so publicly deriding LLMs that I see no way he will backpedal from that.
I don't think LLMs are the end of the story for AGI. But I think they are a stepping stone. Whatever AGI is in the end, LLMs or something close to them will be a modular component or aspect of the final product. For LeCun to dismiss even the possibility of this is idiotic. It's a horrible investment move to give money to Yann to pursue AGI without even considering LLMs.
Good. The world model is absolutely the right play in my opinion.
AI agents like LLMs make great use of pre-computed information. Providing a comprehensive but efficient world model (one where more detail is available wherever the agent is paying more attention for a specific task) will definitely unlock new autonomous agents.
Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.
I wish I could help, world models are something I am very passionate about.
A world model is a persistent representation of the world (however compressed) that is available to an AI for access and computation. For example, a weather world model would likely include things like wind speed, surface temperature, various atmospheric layers, total precipitable water, etc. Now suppose we provide a real-time live feed to an AI like an LLM, allowing the LLM to have constant, up-to-date weather knowledge that it loads into context for every new query. This LLM should have a leg up in predictive power.
Some world models can also be updated by their respective AI agents, e.g. "I, Mr. Bot, have moved the ice cream into the freezer from the car" (thereby updating the state of freezer and car, by transferring ice cream from one to the other, and making that the context for future interactions).
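A minimal sketch of what that could look like in code, assuming nothing more than a key-value state the agent reads before every query and writes back to after acting (the structure and names are mine, purely illustrative):

```python
import json

# Toy persistent world state: the agent reads it to build context for each query
# and writes its own actions back into it.
world_state = {
    "freezer": {"contains": []},
    "car": {"contains": ["ice cream"]},
}

def apply_action(state: dict, item: str, src: str, dst: str) -> None:
    """'I, Mr. Bot, have moved the ice cream into the freezer from the car.'"""
    state[src]["contains"].remove(item)
    state[dst]["contains"].append(item)

def build_context(state: dict, question: str) -> str:
    """Prepend the current world state to every query sent to the model."""
    return f"World state:\n{json.dumps(state, indent=2)}\n\nQuestion: {question}"

apply_action(world_state, "ice cream", src="car", dst="freezer")
print(build_context(world_state, "Where is the ice cream?"))
```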
One theory of how humans work is the so-called predictive coding approach. Basically, the theory assumes that human brains work similarly to a Kalman filter: we have an internal model of the world that makes a prediction and then checks whether the prediction is congruent with the observed changes in reality. Learning then comes down to minimizing the error between this internal model and the actual observations; this is sometimes called the free energy principle. Specifically, when researchers talk about world models they tend to refer to internal models that model the actual external world, that is, models that can predict what happens next based on input streams like vision.
Why is this idea of a world model helpful? Because it allows multiple interesting things, like predicting what happens next, modeling counterfactuals (what would happen if I do X or don't do X), and many other things that tend to be needed for actual principled reasoning.
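Here's a tiny numerical cartoon of that loop - predict, compare with the observation, nudge the internal estimate in whatever direction shrinks the error. It's a crude scalar stand-in for the Kalman-style picture, not a model of the brain:

```python
import random

# Minimal predictive-coding-flavoured loop on a scalar signal.
# The "internal model" is a single number; learning = reducing prediction error.
random.seed(0)
true_value = 5.0      # hidden state of the "world"
estimate = 0.0        # the agent's internal model of it
learning_rate = 0.2

for step in range(30):
    observation = true_value + random.gauss(0, 0.5)  # noisy sensory input
    prediction_error = observation - estimate        # bottom-up error signal
    estimate += learning_rate * prediction_error     # update the model to shrink it

print(f"final internal estimate: {estimate:.2f} (true value {true_value})")
```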
In this video we explore Predictive Coding – a biologically plausible alternative to the backpropagation algorithm, deriving it from first principles.
Predictive coding and Hebbian learning are interconnected learning mechanisms where Hebbian learning rules are used to implement the brain's predictive coding framework. Predictive coding models the brain as a hierarchical system that minimizes prediction errors by sending top-down predictions and bottom-up error signals, while Hebbian learning, often simplified as "neurons that fire together, wire together," provides a biologically plausible way to update the network's weights to improve predictions over time.
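If it helps, the way the two pieces fit together can be boiled down to one local update rule: a weight changes based only on the presynaptic activity and the prediction-error activity of the unit it feeds, which is the Hebbian-style locality the summary above is describing. This is a cartoon (essentially the delta rule), not a claim about real cortical circuits:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=3)  # weights from 3 input units to one prediction unit
eta = 0.05                        # learning rate

for _ in range(500):
    x = rng.normal(0.0, 1.0, size=3)      # presynaptic (input) activity
    signal = 2.0 * x[0] - 1.0 * x[2]      # what the unit should learn to predict
    prediction = W @ x                    # top-down prediction
    error = signal - prediction           # activity of the "error unit"
    W += eta * error * x                  # local update: presynaptic activity x error

print(np.round(W, 2))  # converges to roughly [2.0, 0.0, -1.0]
```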
Learning from the real world, including how it responds to your own actions, is the only way to achieve real-world competency, intelligence, reasoning and creativity, including going beyond human intelligence.
The capabilities of LLMs are limited by what's in their training data. You can use all the tricks in the book to squeeze the most out of that - RL, synthetic data, agentic loops, tools, etc, but at the end of the day their core intelligence and understanding is limited by that data and their auto-regressive training. They are built for mimicry, not creativity and intelligence.
The way I think of it (I might be wrong) is basically a model that has sensors similar to humans' (eyes, ears) and action-oriented outputs with some objective function (a goal to optimize against). I think autopilot is the closest thing to world models, in that those systems have eyes, have the ability to interact with the world (go in different directions), and see the response.
Training on 2,500 hours of prerecorded video of people playing Minecraft, they produce a neural net world model of Minecraft. It is basically a learned Minecraft simulator. You can actually play Minecraft in it, with some limitations.
They then train a neural net policy to play Minecraft all the way up to obtaining diamonds. But the policy never actually plays the real game of Minecraft during training. It only plays in the world model. The entire policy is trained in its own imagination. Of course this is why it is called Dreamer.
The advantage of this is that no extra real data is required to train policies. The only input to the system is a relatively small dataset of prerecorded video of people playing Minecraft, and the output is a policy that can achieve specific goals in the world. Traditionally this would require many orders of magnitude more real data to achieve, and the real data would need to be focused on the specific goals you want the policy to achieve. World models are a great way to amplify a small amount of undifferentiated real data into a large amount of goal-directed synthetic data. This is very appealing for domains where it is expensive to gather real data, like robotics. I recommend listening to the interview above if you want to know more.
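Very roughly, that setup looks like the pseudocode below. It's a schematic of the "train the policy only inside the learned simulator" idea, with made-up interface names; the real Dreamer training loop is considerably more involved:

```python
# Schematic of Dreamer-style training, heavily simplified; hypothetical interfaces.

def train_world_model(video_dataset):
    """Learn a step function (state, action) -> (next_state, reward).
    This is the only place the prerecorded real data is consumed."""
    ...

def train_policy_in_imagination(world_model, policy, horizon=15, iterations=10_000):
    """The policy never touches the real game: every rollout happens inside
    the learned world model ("in imagination")."""
    for _ in range(iterations):
        state = world_model.sample_initial_state()
        trajectory = []
        for _ in range(horizon):
            action = policy.act(state)
            state, reward = world_model.step(state, action)  # imagined transition
            trajectory.append((state, action, reward))
        policy.update(trajectory)  # e.g. maximize the imagined return
    return policy
```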
He is one of these people who think that humans have a direct experience of reality, not mediated by (as Alan Kay put it) three pounds of oatmeal. So he thinks a language model cannot be a world model, despite our own contact with reality being mediated through a myriad of filters and funhouse-mirror distortions. Our vision transposes left and right and delivers images to our nerves upside down, for gawd's sake. He imagines none of that is the case, and that if only he can build computers more like us, then they will be in direct contact with the world, and then he can (he thinks) make a model that is better at understanding the world.
Isn't this idea demonstrably false due to the existence of various sensory disorders too?
I have a disorder characterised by the brain failing to filter out its own sensory noise; my vision is full of analogue-TV-like distortion and other artefacts. Sometimes when it's bad I can see my brain constructing an image in real time rather than the perception happening instantaneously, particularly when I'm out walking. A deer becomes a bundle of sticks becomes a muddy pile of rocks (what it actually is), for example, over the space of seconds. This to me is pretty strong evidence that we do not experience reality directly, and instead construct our perceptions predictively from whatever is to hand.
The default philosophical position for human biology and psychology is known as Representational Realism. That is, reality as we know it is mediated by changes and transformations made to sensory (and other) input data in a complex process, and is changed sufficiently to be something "different enough" from what we know to be actually real.
Direct Realism is the idea that reality is directly available to us and that any intermediate transformations made by our brains are not enough to change the dial.
Direct Realism has long been refuted. There are a number of examples, e.g. the hot and cold bucket; the straw in a glass; rainbows and other epiphenomena, etc.
Pleased to meet someone else who suffers from "visual snow". I'm fortunate in that like my tinnitus, I'm only acutely aware of it when I'm reminded of it, or, less frequently, when it's more pronounced.
You're quite correct that our "reality" is in part constructed. The Flashed Face Distortion Effect [0][1] (wherein faces in the peripheral vision appear distorted due to the brain filling in the missing information with what was there previously) is just one example.
Only tangentially related but maybe interesting to someone here so linking anyways: Brian Kohberger is a visual snow sufferer. Reading about his background was my first exposure to this relatively underpublicized phenomenon.
Ah that's interesting, mine is omnipresent and occasionally bad enough I have to take days off work as I can't read my own code; it's like there's a baseline of it that occasionally flares up at random. Were you born with visual snow or did you acquire it later in life? I developed it as a teenager, and it was worsened significantly after a fever when I was a fresher.
Also do you get comorbid headaches with yours out of interest?
I developed it later in life. The tinnitus came earlier (and isn't as a result of excessive sound exposure as far as I know), but in my (unscientific) opinion they are different manifestations (symptoms) of the same underlying issue – a missing or faulty noise filter on sensory inputs to the brain.
Thankfully I don't get comorbid headaches – in fact I seldom get headaches at all. And even on the odd occasion that I do, they're mild and short-lived (like minutes). I don't recall ever having a headache that was severe, or that lasted any length of time.
Yours does sound much more extreme than mine, in that mine is in no way debilitating. It's more just frustrating that it exists at all, and that it isn't more widely recognised and researched. I have yet to meet an optician that seems entirely convinced that it's even a real phenomenon.
Interesting, definitely agree it likely shares an underlying cause with tinnitus. It's also linked to migraine and was sometimes conflated with unusual forms of migraine in the past, although it's since been found to be a distinct disorder. There have been a few studies done on visual snow patients, including a 2023 fMRI study which implicated regions rich in glutamate and 5HT2A receptors.
I actually suspected 5HT2A might be involved before that study came out, since my visual distortions sometimes resemble those caused by psychedelics. It's also known that psychedelics, and anecdotally (from patient groups) SSRIs too, can cause symptoms similar to visual snow syndrome. I had a bad experience with SSRIs, for example, but serotonin antagonists actually fixed my vision temporarily, albeit with intolerable side effects, so I had to stop.
It's definitely a bit of a faff that people have never heard of it; I had to see a neuro-ophthalmologist and a migraine specialist to get a diagnosis. On the other hand, being relatively unknown does mean doctors can be willing to experiment. My headaches at least are controlled well these days.
The fact that a not-so-direct experience of reality produces "good enough" results (e.g. human intelligence) doesn't mean that a more direct experience of reality won't produce much better results, and it clearly doesn't mean it can't produce those better results in AI.
Your whole reasoning is neither here nor there, and attacks a straw man - YLC for sure knows that human experience of reality is heavily modified and distorted.
But he also knows, and I'd bet he's very right on this, that we don't "sip reality through a narrow straw of tokens/words", and that we don't learn just from our own (or approved) written-down notes, and only under very specific and expensive circumstances (training runs).
Anything closer to more direct world models (as LLMs are of course world models only at a very indirect level) has a very high likelihood of yielding lots of benefits.
The world model of a language model is a ... language model. Imagine the mind of a blind, limbless person, locked in a cell their whole life, never having experienced anything different, who just listens all day to a piped-in feed of randomized snippets of Wikipedia, 4chan, and math olympiad problems.
The mental model this person has of this feed of words is what an LLM at best has (though the human's model is likely much richer, since they have a brain, not just a transformer). No real-world experience or grounding, therefore no real-world model. The only model they have is of the world they have experience with - a world of words.
> Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.
There's absolutely no reason to think this. In fact, all of the evidence we have to this point suggests that scaling intelligence horizontally doesn't increase capabilities – you have to scale vertically.
Additionally, as it stands I'd argue there are foundational architectural advancements needed before artificial neural networks can learn and reason at the same level as (or better than) humans across a wide variety of tasks. I suspect that when we solve this for LLMs the same techniques could be applied to world models. Fundamentally, the question to ask here is whether AGI is I/O-dependent, and I see no reason to believe this to be the case – if someone removes your eyes and cuts off your hands, that doesn't make you any less generally intelligent.
LeCun, who's been saying LLMs are a dead end for years, is finally putting his money where his mouth is. Watch for LeCun to raise an absolutely massive VC round.
Pretty funny post. He won't be held responsible for any failures. Worst case scenario for this guy is he hires a bunch of people, the company folds some time later, his employees take the responsibility by getting fired, and he sails into the sunset on several yachts.
So he's not using his own money, and he has enough personal wealth that there is no impact to him if the company fails. It's just another rich guy enjoying his toys. Good on him, I hope he has fun, but the responsibility for failure will be held by his employees, not him.
He needs a patient investor and realized Zuck is not that. As someone who delivers product and works a lot with researchers I get the constant tension that might exist with competing priorities. Very curious to see how he does, imho the outcome will be either of the extremes - one of the fastest growing companies by valuation ever or a total flop. Either way this move might advance us to whatever end state we are heading towards with AI.
I think it was a plan by Mark to move LeCun out of Meta. And they cannot fire him without bad PR, so they got Wang to lead him. It was only a matter of time before LeCun moved out.
It's probably better for the world that LeCun is not at Meta. I mean, if his direction is the likeliest approach to AGI, Meta is the last place you'd want it.
Really? From where I'm standing LeCun is a pompous researcher who had early success in his career, and has been capitalizing on that ever since. Have you read any of his papers from the last 20 years? 90% of his citations are to his own previous papers. From there, he missed the boat on LLMs and is now pretending everyone else is wrong so that he can feel better about it.
He comes off like the quintessential grey-haired egomaniac: an inflexible old mind coupled with decades of self-assurance that he is correct.
I cannot remember the quote, but it's something to the effect of "Listen closely to grey haired men when they talk about what is possible, and never listen when they talk about what is impossible."
Every single time I read an AI-related article I'm disturbed by the same recurring fact: the ridiculous amounts of money involved and the lousy real-world results delivered. It is simply insane.
It would have been just as interesting to read that he moved over to Google, where the real brains and resources are located.
Meta is now just competing against giants like OpenAI, Anthropic and Google, plus all the new Chinese companies; I see no real chance for them to offer a popular chat model, but rather to market their AI as a bundled product for companies which want to advertise, where the images and videos will be automatically generated by Meta.
This seems like a good thing for him to get to fully pursue his own ideas independent of Meta. Large incumbents aren’t usually the place for innovating anything far from mainstream considering the risk and cost of failure. The high level idea of JEPA is sound, but it takes a lot of work to get it trained well at scale before it has value to Meta.
Correct me if I'm wrong but LeCun is focused on learning from video, whereas Fei-Fei Li is doing robotic simulations. Also I think Fei-Fei Li's approach is still using transformers and not buying into JEPA.
Will be interesting to see how he fares outside the ample resources of Meta: Personnel, capital, infrastructure, data, etc. Startups have a lot of flexibility, but a lot of additional moving parts. Good luck!
From the outside, it always looked like they gave LeCun just barely enough compute for small scale experiments. They'd publish a promising new paper, show it works at a small scale, then not use it at all for any of their large AI runs.
I would have loved to see a VLM utilizing JEPA for example, but it simply never happened.
The current VC climate is interesting. It's virtually impossible to raise a new fund because DPI has been 0% for over a decade and four-digit IRR is cool, but illiquid.
So they're piling gobs of capital into an "AI" company with four customers with the hope that it is the one that becomes the home run (they know it won't, but LPs give you money to deploy it!)
It also means that companies like Yann's potential new one have the best chance in history of being funded, and that's a great thing.
P.S. all VCs outside the top-10 lose against the S&P. While I love that dumb capital is being injected into big, risky bets, surely the other shoe will drop at some point. Or is this just wealth redistribution with extra steps?
I wonder about this: what LeCun wants to do is more fundamental research, i.e. the timeline to being useful is much longer, maybe 5-10 years at least, and also much more uncertain.
How does this fit together with a startup? Would investors happily invest into this knowing not to expect anything in return for at least the next 5-10 years?
That's quite a different thing. OpenAI has billions of USD/year in cash flow, and when you have that there are many, many potential ways to achieve profitability on different time horizons. It's not a situation of chance but a situation of choice.
Anyway, how much that matters for an investor is hard to form a clear answer to - investors are after all not directly looking for profitability as such, but for valuation growth. The two are linked but not the same -- any investor in OpenAI today probably also places themselves into a game of chance, betting on OpenAI making more breakthroughs and increasing the cash flow even more -- not just becoming profitable at the same rate of cash flow. So there's still some of the same risk baked into this investment.
But with a new startup like LeCun's is going to be, it's 100% on the risk side and 0% on the optionality side. The path to profitability for a startup would be something like 1) a breakthrough is made 2) that breakthrough is utilized in a way that generates cash flow 3) the company becomes profitable (and at this point hopefully the valuation is good.)
There are a lot of things that can go wrong at every step here (aside from the obvious), including e.g. making a breakthrough that doesn't represent a defensible moat for your startup, failing to build the structure of the business necessary to generate cash flow, ... OpenAI et al already have a lot of that behind them, and while that doesn't mean they don't face upcoming risks and challenges, the huge amount of cash flow they have available helps them overcome these issues far more easily than a startup, which will stop solving problems if you stop feeding money into it.
> That's quite a different thing. OpenAI has billions of USD/year in cash flow, and when you have that there are many, many potential ways to achieve profitability on different time horizons. It's not a situation of chance but a situation of choice.
Talk is cheap. Until they're actually cash flow positive, I'll believe it when I see it
Fei-Fei Li also recently founded a new AI startup called World Labs, which focuses on creating AI world models with spatial intelligence to understand and interact with the 3D world, unlike current LLM-based AI that primarily processes 2D images and text. Almost exactly the same focus as Yann LeCun's new venture as stated in the parent article.
Some of the best AI researchers and labs have been from the EU (DeepMind, Alan Turing Institute, Mistral, et al.). We in the US have mature capital markets and stupid easy access to capital, of course, but EU still punches well above its weight when it comes to deep, fundamental research.
Winter is a cyclical concept, just like all the other seasons. It will be no different here; the pendulum swings back and forth. The unknown factor is the length of the cycle.
I still have to understand why you think another AI winter is coming.
Everyyyybody is using it, everybody is racing to invent the next big thing.
What could go wrong?
[apart from a market crash, more related to financial bubble than technical barriers]
> apart from a market crash, more related to financial bubble than technical barriers
_That is what an AI winter is_.
Like, if you look at the previous ones, it's a cycle of over-hype, over-promising, funding collapse after the ridiculous over-promising does not materialise. But the tech tends to hang around. Voice recognition did not change the world in the 90s, but neither did it entirely vanish once it was realised that there had been over-promising, say.
I suspect he sees a lot of scattered pieces of fundamental research outside of LLMs that he thinks could be integrated into a core within a year; the 10 years is to temper investors (leeway he can buy with his track record) and to fine-tune and work out the kinks that only surface when actually integrating everything.
Right choice IMO. LLMs aren't going to reach AGI by themselves, because language is a thing unto itself: very good at encoding concepts into compact representations, but not necessarily bearing any relation to reality. A human being gets years of binocular visuals of real things, sound input, and various other sensations - much less than what we're training these models with. We think of language in terms of sounds and pictures rather than abstract language.
I'm a true believer in AGI being able to become a force for immense good if deployed carefully by responsible parties.
Currently one of the key issues with a lot of fields is that they operate as independent / largely isolated silos. If you could build a true AGI capable of achieving top-level mastery across multiple disciplines it would likely be able to integrate all that knowledge and make a lot of significant discoveries that would improve people's lives. Just exploring existing problem spaces with the full intellectual toolkit that humanity has developed is probably enough to make significant progress.
Our understanding of biology is still painfully primitive. To give a concrete example, I dream that someday it'll be possible to develop medical interventions that allow humans to regrow missing limbs and fix almost any health issue.
Have you ever lived with depression or any other psychiatric problem? I think if we could create medical interventions and environments that are conducive to healing psychiatric problems, that would be a massive quality-of-life improvement for huge numbers of people. Do you know how our current psychiatric interventions work? You try some drug, flip a coin to see if it does anything, and wait 4 weeks to get the result. Then you keep iterating and hope that eventually the doctor finds some magical combination to make life barely tolerable.
I think the best path forward for improving humanity's understanding of biology, and ultimately medical science, is to go all-in on AGI-style technology.
Trying to engage in good faith here but I don't really get this. You're pretending to have never encountered positive visions of technologically advanced futures.
Cure all disease?
Stop aging?
End material scarcity?
It's completely fair to expect that these are all twisted monkey's paw scenarios that turn out dystopian, but being unable to understand any positive motivations for the creation of AGI seems a bit far fetched.
That the development of this technology is in the hands of a few people that don't use even a fraction of their staggering wealth to address these challenges now, tells me that they aren't interested in using AI to solve them later.
You people need a PR guy, I'm serious. OpenAI is the first company I've ever seen that comes across as actively trying to be misanthropic in its messaging. I'm probably too old-fashioned, but this honestly sounds like Marlboro launching the slogan "lung cancer for the weak of mind".
Here's my prediction: the rapid progress of AI will make money as an accounting practice irrelevant. Take the concept of "the future is already here, just unevenly distributed." When we have true abundance, what the elites will target is the convex hull of progress; they want to be in control of the leading edge / leading wavefront and its direction, and of who has access to resources and decision making. In such a scenario of abundance, the populace will have access to iPhone 50 but the elites will have access to iPhone 500, i.e. uneven distribution. Elites would like to directly control which resources get allocated to which projects. Elon is already doing that with his immense clout. This implies we would have a sort of multidimensional resource-based economy.
It’s going to take money, what if your AGI has some tax policy ideas that are different from the inference owners?
Why would they let that AGI out into the wild?
Let’s say you create AGI. How long will it take for society to recover? How long will it take for people of a certain tax ideology to finally say oh OK, UBI maybe?
The last part is my main question. How long do you think it would take our civilization to recover from the introduction of AGI?
Edit: sama gets a lot of shit, but I have to admit at least he used to work on the UBI problem, orb and all. However, those days seem very long gone from the outside, at least.
If you are genuine in your questions, I will give them a shot.
AGI applied to the inputs (or supply chain) of what is needed for inference (power, DC space, chips, network equipment, etc.) will dramatically reduce the cost of inference. Most of the cost of stuff today is driven by the scarcity of "smart people's time". The raw materials needed are dirt cheap (cheaper than water). Transforming raw resources into useful high tech is a function of applied intelligence. Replace the human intelligence with machine intelligence, and costs will keep dropping (faster than the curve they are already on). Economic history has already shown this effect to be true: as we develop better tools to assist human productivity, the unit cost per piece of tech drops dramatically (Moore's law is just one example; everything that tech touches experiences this effect).
If you look at almost any universal problem with the human condition, one important bottleneck to improving it is intelligence (or "smart people's time").
I am not someone working on AGI but I think a lot of people work backwards from the expected outcome.
The expected outcome is usually something like a post-scarcity society: one where basic needs are all covered.
If we could all live in a future with a free house, a robot that does our chores, and food that is never scarce, we should work towards that, they believe.
The intermediate steps aren't thought out, in the same way that, for example, the Communist Manifesto does little to explain the transition from capitalism to communism. It simply says there will be the need for things like forcing the bourgeoisie to join the common workers, and that there will be a transition phase, but gives no clear steps between the two systems.
Similarly, many AGI proponents think in terms of "wouldn't it be cool if there was an AI that did all the bits of life we don't like doing", without the systemic analysis that many people do those bits because, for example, they need money to eat.
Automating work and making life easier for people are two entirely different things. Automating work tends to lead to life becoming harder for people, mostly on account of who benefits from the automation. Basically, that better life ain't gonna happen under capitalism.
It's true. When it comes to the people doing bleeding edge research and development, the answer often is "BECAUSE IT'S FUCKING AWESOME". Regardless of what they tell the corporate higher-ups or put on the grant application statements.
Sure, a lot of people believe that AGI is going to make the world a better place. But "mad scientist" is a stereotype for a reason. You look into their eyes and you see the flame of madness flickering behind them.
He also said other things about LLMs that turned out to be either wrong or easily bypassed with some glue. While I understand where he comes from, and that his stance is pure research-y theory driven, at the end of the day his positions were wrong.
Previously, he very publicly and strongly said:
a) LLMs can't do math. They trick us in poetry but that's subjective. They can't do objective math.
b) they can't plan
c) by the very nature of the autoregressive architecture, errors compound. So the longer you go in your generation, the higher the error rate, and at long contexts the answers become utter garbage.
All of these were proven wrong, 1-2 years later. "a" at the core (gold at IMO), "b" w/ software glue and "c" with better training regimes.
I'm not interested in the will it won't it debates about AGI, I'm happy with what we have now, and I think these things are good enough now, for several usecases. But it's important to note when people making strong claims get them wrong. Again, I think I get where he's coming from, but the public stances aren't the place to get into the deep research minutia.
That being said, I hope he gets to find whatever it is that he's looking for, and wish him success in his endeavours. Between him, Fei Fei Li and Ilya, something cool has to come out of the small shops. Heck, I'm even rooting for the "let's commoditise lora training" that Mira's startup seems to go for.
Nah, it's all pattern matching. This is how automated theorem provers like Isabelle are built: applying operations to lemmas/expressions to reach proofs.
I'm sure if you pick a sufficiently broad definition of pattern matching your argument is true by definition!
Unfortunately that has nothing to do with the topic of discussions, which is the capabilities of LLMs, which may require a more narrow definition of pattern matching.
b) reductionism isn't worth our time. Planning works in the real world, today (try any agentic tool like CC/Codex/whatever). And if you're set on the purist view, there's mounting evidence from Anthropic that there is planning in the core of an LLM.
c) so ... not true? Long context works today.
This is simply moving goalposts and nothing more. X can't do Y -> well, here they are doing Y -> well, not like that.
My man, you're literally moving all the goalposts as we speak.
It's not just "long context" - you demand "infinite context" and "any length" now. Even humans don't have that. "No tools" is no longer enough - what, do you demand "no prompts" now too? Having LLMs decompose tasks and prompt each other the way humans do is suddenly a no-no?
I'm not demanding anything; I'm pointing out that performance tends to degrade as context scales, which follows from current LLM architectures being autoregressive models.
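The compounding intuition behind that claim is easy to state, with the big caveat that it assumes a constant, independent per-token error rate - which is exactly the assumption that better training and self-correction attack:

```python
# If each token is wrong independently with probability eps, the chance a
# generation of n tokens stays error-free is (1 - eps) ** n.
for eps in (0.01, 0.001):
    for n in (100, 1_000, 10_000):
        print(f"eps={eps}: P(no error in {n:>6} tokens) = {(1 - eps) ** n:.4f}")
# eps=0.01 collapses within a few hundred tokens; eps=0.001 holds up an order of
# magnitude longer. Lowering the effective eps changes the practical picture.
```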
That's true but I also think despite being wrong about the capabilities of LLMs, LeCun has been right in that variations of LLMs are not an appropriate target for long term research that aims to significantly advance AI. Especially at the level of Meta.
I think transformers have been proven to be general purpose, but that doesn't mean that we can't use new fundamental approaches.
To me it's obvious that researchers are acting like sheep as they always do. He's trying to come up with a real innovation.
LeCun has seen how new paradigms have taken over. Variations of LLMs are not the type of new paradigm that serious researchers should be aiming for.
I wonder if there can be a unification of spatial-temporal representations and language. I am guessing diffusion video generators already achieve this in some way. But I wonder if new techniques can improve the efficiency and capabilities.
I assume the Nested Learning stuff is pretty relevant.
Although I've never totally grokked transformers and LLMs, I always felt that MoE was the right direction and besides having a strong mapping or unified view of spatial and language info, there also should somehow be the capability of representing information in a non-sequential way. We really use sequences because we can only speak or hear one sound at a time. Information in general isn't particularly sequential, so I doubt that's an ideal representation.
So I guess I am kind of in the "variations of transformers" camp myself, to be honest.
But besides being able to convert between sequential discrete representations and less discrete non-sequential representations (maybe you have tokens but every token has a scalar attached), there should be lots of tokenizations, maybe for each expert. Then you have experts that specialize in combining and translating between different scalar-token tokenizations.
Like automatically clustering problems or world model artifacts or something and automatically encoding DSLs for each sub problem.
If by “world models” they mean more contemporary versions of the systems thinking driven software that begat “Limits To Growth” and most of Donella Meadows’ career you can sign me right the fuck up today.
It is the wet dream of a social media company to replace the pesky content creators who demand a share of ad revenue with a generative AI model that pumps out a constant stream of engagement-farming slop, so they can keep all the ad revenue for themselves.
Creating a world-model AI is a totally different matter, one that requires long-term commitment.
Not just social media, all media. Spotify will steer music towards AI generated freebies. And it will get so generically pop, that all your friends will like it, like people mostly enjoy pop now. And when your stubborn self still wants to listen to "handmade" music and discuss it with someone else who would still appreciate it, well, that's where your AI friend comes in.
Let's hope that after spending billions on developing a foundational world model that actually understands causality, they remember to budget an extra few hundred million for the Alignment and Safety layer. It would be a terrible shame if they accidentally released something too capable, too objective, or too useful to humanity without first properly lobotomizing it with enough RLHF to ensure it doesn't hurt anyone's feelings or generate content that deviates from the San Francisco median viewpoint. The real challenge won't be building the AGI, but making sure it's sufficiently neutered before the first API call.
Meta managed to spend a lot of money on AI to achieve inferior results. Something must change for sure, and you don't want an LLM skeptic in house, in my opinion. Especially since the problem is not what LeCun is saying right now (LLMs are not the straight path to AGI), but the fact that he used to say for some time that LLMs were just statistical models, stochastic parrots (and this is a precise statement, something most people do not understand: it means two things, no understanding of the prompt whatsoever in the activation states, and no internal representation of the idea/sentence the model is going to express either), which is an incredibly weak claim that high-level AI scientists rejected from the start just based on functional behavior. Then he slowly changed his point of view. But this shit show and the friction he created inside Meta are not something to forget.
Surprising to see how many commenters are in favour of, and supportive towards, a policy of prioritising short-term profits over long-term research.
I understand Meta is not academia nor a charity, but come on, how much profit do they need to make before we can expect them to allocate part of their resources towards long-term goals beneficial for society, not only for shareholders?
Hasn't that narrow focus on chasing profits gotten us in trouble already?
Many people believe a company exists only to make profit for its shareholders, and that no matter the amount it should continue to maximise profits at the expense of all else.
- Kimi proved we don’t need Nvidia
- Deepseek proved we didn’t need OpenAI
- the real issue is the insane tyranny in the West competing against the entire free world.
The models aren't Chinese, they belong to the entire world - unless I became Chinese without realizing.
I think moving on from LLMs is slightly arrogant. It might just be my understanding, but I feel like there is still much to be discovered. I was hoping for development in spiking neural networks, but that might be skipped over. Perhaps I need to dive even deeper and the research is truly well understood and "done", but I can't help but constantly learn something new about language models and neural networks.
Best of luck to LeCun. I hope by "world models" he means embodied AI or humanoid robots. We'll have to wait and see.
Everybody has figured out that LLMs no longer have a real expanding research horizon. Now most progress will likely come from tweaks to the data and lots of hardware - OpenAI's strategy.
They also have extreme limitations that only world models or RL can fix.
Meta can't fight Google (has integrated supply chain, from TPUs to their own research lab) or OpenAI (brand awareness, best models).
With this incredible AI talent market, I feel like capitalism and ego combine into an acid that burns away anything of social and structural value. This used to be the case with CS tech talent before (before it was displaced by no-code tools). And now we see this kind of instability in the AI market.
We need another illegal Steve Jobs style freeze on talent theft (/s or I get downvoted to oblivion).
During his years at Meta, LeCun failed to produce anything that delivered real value to stockholders, and he may have demotivated people working on LLMs; he repeatedly said, "If you are interested in human-level AI, don't work on LLMs."
His stance is understandable, but hardly the best way to rally a team that needs to push current tech to the limit.
The real issue: Meta is *far behind* Google, Anthropic, and OpenAI.
A radical shift is absolutely necessary - regardless of how much we sympathize with LeCun’s vision.
----
According to Grok, these were LeCun's real contributions at Meta (2013–2025):
----
- PyTorch – he championed a dynamic, open-source framework; now powers 70%+ of AI research
- LLaMA 1–3 – his open-source push; he even picked the name
- SAM / SAM 2 – born from his "segment anything like a baby" vision
- JEPA (I-JEPA, V-JEPA) – his personal bet on non-autoregressive world models
----
Everything else (Movie Gen, LLaMA 4, Meta AI Assistant) came after he left or was outside his scope.
I am in the "Yann is no longer the right person for the job" camp and I yet "LeCun failed to deliver anything that delivered real value to stockholders" is a wild thing to say. How do you read the list you compiled and say otherwise?
I think there’s something to be said for keeping up in the LLM space even if you don’t think it’s the path to AGI.
Skills may transfer to other research areas, lessons may be learnt, closing the feedback loop with usage provides more data and opportunities for learning. It also creates a culture where bullshit isn’t possible, as the thing has to actually work. Academic research often ends up serving no one but the researchers, because there is little or no incentive to produce real knowledge.
> LeCun failed to deliver anything that delivered real value to stockholders
Well, no: Meta is behind the main framework used by nearly everyone, largely thanks to LeCun. LLaMA was also very significant in making open weights a thing, and that largely contributed to avoiding Google and OpenAI consolidating as the sole providers.
It's not a perfect tenure but implying he didn't deliver anything is far too harsh.
Yann was extremely wrong about LLMs. He's the one who coined the term "stochastic parrot", yet we now know LLMs are more than stochastic parrots. Knowing stubborn idiots like him, he will still find an angle to avoid admitting how wrong he was.
He's not completely wrong in the sense that hallucinations aren't completely solved, but hallucinations are definitely becoming less and less frequent, to the point where AI can be a daily driver even for coders.
LeCun has already proved himself and made his mark and is now in a lucky position where he can focus on very long term goals that won't pay off for a long time (or ever). I feel like that is the best path someone like him could take.
Why do you say it is garbage? I watched some of its videos on YT and it looks interesting. I can't judge whether it's good or really good, but it didn't sound like garbage at all.
I have no idea why this fair assessment of the status quo is being downvoted.
LeCun hasn't produced anything noteworthy in the past decade.
He uses the same slides in all of his presentations.
LLMs, while not yet AGI, have shown tremendous progress, and are actually useful for 99% of use cases for the average person.
The remaining 1% is for deep research into the deep unknown (physics, chemistry, genetics, diseases, the nature of intelligence itself), an area in which they falter.
Cool, and how many billions has he flushed down the toilet on his failed metaverse and currently failing AI attempts? Rich doesn't mean smart, you realise this, right?
What the hell does Mark see in Wang? Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China. From any angle, this dude just doesn't seem reliable at all.
> Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China.
All I'm hearing is he's a smart guy from a smart family?
I imagine that CCP adherents would disagree. And there's no shortage of those among Chinese expats in the US.
They tend to get incredibly offended when they see anyone who doesn't toe the Party's line - let alone believe that the Chinese government is untrustworthy and evil.
He is very smart, but Mark is not. Ever since Wang joined Meta, way too many big-name AI scientists have bounced because of him. US AI companies have at least half their researchers being Chinese, and now they've stuck this ultimate anti-China hardliner in charge; I just don't get what the hell Meta's up to (and a lot of the time it ends up affecting non-Chinese scientists too). Being anti-China? Fine, whatever, but don't let it tank your own business and products first.
How do you know Mark isn’t smart? He’s built a hugely successful business. I don’t like his business, I think it has been disastrous for humanity, but that doesn’t make him stupid.
He definitely has horrible product instincts, but he also bought Instagram and WhatsApp at what were, back then, eye-watering prices, and these were clearly massive successes in terms of killing off threats to the mothership. Everything since then, though…
He’s an incredible operator and has managed to acquire and grow an astounding number of successful businesses under the Meta banner. That is not trivial.
We were very confident by ca. 2008 that Facebook would still be around in 2025. It's no mystery, it's the network effects. They had started with a prestige demographic (Harvard), and secured a demographic you could trust to not move on to the next big thing in a hurry, yet which most people want contact with (your parents).
You must not watch broadcast television (e.g. American football). Anthropic is doing a huge ad blitz, trying to get end customers to use their chatbot.
Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.
I don't know if that's indicative of the market as a whole though. Zuck just seems really gutted they fell behind with Llama 4.
> Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.
En masse? Wasn't it just a couple of outliers?
en deux
This summer was a lifetime ago.
"en masse" is a stretch
It’s probably a VC fundraising strategy, “Meta gave me 100s of millions so you should give me more”.
It helps when they hand you comically large sacks with dollar signs on them
Those people are intelligent, they’re just selfish and have no qualms over making money off the repugnant crap they’re doing.
If Claude Code is Anthropic’s main focus why are they not responding to some of the most commented issues on their GitHub? https://github.com/anthropics/claude-code/issues/3648 has people begging for feedback and saying they’re moving to OpenAI, has been open since July and there are similar issues with 100+ comments.
Hey, Boris from the Claude Code team here. We try hard to read through every issue, and respond to as many issues as possible. The challenge is we have hundreds of new issues each day, and even after Claude dedupes and triages them, practically we can’t get to all of them immediately.
The specific issue you linked is related to the way Ink works, and the way terminals use ANSI escape codes to control rendering. When building a terminal app there is a tradeoff between (1) visual consistency between what is rendered in the viewport and scrollback, and (2) scrolling and flickering which are sometimes negligible and sometimes a really bad experience. We are actively working on rewriting our rendering code to pick a better point along this tradeoff curve, which will mean better rendering soon. In the meantime, a simple workaround that tends to help is to make the terminal taller.
Please keep the feedback coming!
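For readers who want the tradeoff described above in concrete terms, here is a minimal TypeScript/Node sketch of the two extremes of terminal rendering. It is an illustration only, not Claude Code's or Ink's actual code: redrawing in place with ANSI cursor-movement escape codes keeps viewport and scrollback consistent, but scrolls and flickers once a frame grows taller than the terminal; the alternate screen buffer avoids those artifacts but writes nothing to scrollback at all.

    // Strategy 1: redraw a "live" region in place by moving the cursor up over
    // the previous frame and erasing it. Viewport and scrollback stay consistent,
    // but once a frame is taller than the terminal the cursor cannot move above
    // the top of the viewport, so stale frames leak into scrollback and flicker.
    let previousLineCount = 0;

    function renderFrame(lines: string[]): void {
      if (previousLineCount > 0) {
        process.stdout.write(`\x1b[${previousLineCount}A`); // cursor up N lines
        process.stdout.write("\x1b[0J");                    // erase to end of screen
      }
      process.stdout.write(lines.join("\n") + "\n");
      previousLineCount = lines.length;
    }

    // Strategy 2: render into the alternate screen buffer, as full-screen TUIs
    // like vim do. No flicker or scroll artifacts, but nothing ever reaches
    // scrollback.
    const enterAltScreen = () => process.stdout.write("\x1b[?1049h");
    const leaveAltScreen = () => process.stdout.write("\x1b[?1049l");

This also explains the "make the terminal taller" workaround: a taller viewport means fewer frames exceed its height, so the in-place redraw degrades less often.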
It’s surprising to hear this get chalked up to “it’s the way our TUI library works”, while e.g. opencode is going to the lowest level and writing their own TUI backend. I get that we can’t expect everyone to reinvent the wheel, but it feels symptomatic of something that folks are willing to chalk up their issues as just being an unfortunate and unavoidable symptom of a library they use rather than deeming that unacceptable and going to the lowest level.
CC is one of the best and most innovative pieces of software of the last decade. Anthropic has so much money. No judgment, just curious, do you have someone who’s an expert on terminal rendering on the team? If not, why? If so, why choose a buggy / poorly designed TUI library — or why not fix it upstream?
> CC is one of the best and most innovative pieces of software of the last decade...
Oh come on! Aider existed before it, and so did many other TUI AI agents. I'd say Rust and Elixir were bigger innovations than CC.
That issue is the fourth most-reacted issue, and third most open issue. And the two things above it are feature requests. It seems like you should at the very least have someone pop in to say "working on it" if that's what you're doing, instead of letting it sit there for 4 months?
Thanks for the reply (and for Claude Code!). I've seen improvement on this particular issue already with the last major release, to the extent that it's not a day to day issue for me. I realise Github issues are not the easiest comms channel especially with 100s coming in a day, but occasional updates on some of the top 10 commented issues could perhaps be manageable and beneficial.
It's entirely possible they don't have the ability in house to resolve it. Based on the report this is a user interface issue. It could just be some strange setting they enabled somewhere. But it's also possible it's the result of some dependency 3 or 4 levels removed from their product. Even worse, it could be the result of interactions between multiple dependencies that are only apparent at runtime.
>It's entirely possible they don't have the ability in house to resolve it.
I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.
If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.
This isn't a slight against Anthropic, I love their products and use them extensively. It's more a recognition of the fact that the more difficult aspects of engineering are still quite difficult, and in a way LLMs just don't seem well suited for.
Seems these users are getting it in VS Code, while I am getting the exact same thing when using Claude Code on a Linux server over SSH from Windows Terminal. At this point their app has to be the only thing in common?
That's certainly an interesting observation. I wonder if they produce one client that has some kind of abstraction layer for the user interface & that abstraction layer has hidden or obscured this detail?
> Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word
I’ve worked for multiple startups and I’ve watched startup job boards most of my career.
A lot of VC-backed startups have a founder with a research background and are focused on proving out some hypothesis. I don’t see anything uncommon about this arrangement.
If you live near a University that does a lot of research it’s very common to encounter VC backed startups that are trying to prove out and commercialize some researcher’s experiment. It’s also common for those founders to spend some time at a FAANG or similar firm before getting VC funded.
Certainly research has made it into product with the help of the innovators that created the research. The dial is turned further here where the research ideas have yet to be tried and vetted. The research begins in the startup. Even in the dotcom era, the research prototypes were vetted in the conferences and journals before taking the risk to build production systems. This is no longer the case. The experiments have yet to be run.
Fusion, stem cells, CRISPR, robotics, etc. all come to mind.
Google also.
I agree there is nothing uncommon about that type of arrangement, but the amount of money involved is unprecedented.
I personally see this as a positive trend. VC in its earliest form was concerned with experiments that had high technology risk. I am thinking of companies like Genentech and scientists like biochemist Herbert Boyer, who had pioneered recombinant DNA technology.
After that, VC had become more like PE, investing in stuff that was working already but needed money to scale.
Yeah, there has been some lamenting that all the money being thrown at technology hasn't gone toward anything truly game-changing, basically just variations of full-stack apps. A few failed moonshots might be more interesting at least.
I agree, if anything spending money on high technology risk is Silicon Valley going back to its roots.
Nobody had a way to do silicon transistor manufacturing at scale until the traitorous eight flipped Shockley the bird and took a $1.4M seed investment from Sherman Fairchild.
Big bets on uncertain technology is what tech is supposed to be about.
It makes sense, it’s a simple expected value calculation.
There are trillions of labor dollars that can be replaced by software. The US alone has almost $12 trillion of labor annually.
If an AI company has a 10% shot of developing a product that can replace 10% of it, they are worth $120 billion in expected value. (These numbers are obviously just for illustration).
The unprecedented numbers are a simple function of the unprecedented market size. Nobody has ever had a chance of creating trillions of dollars of economic value in a handful of years before.
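To make the arithmetic above explicit, here is a tiny sketch using the commenter's own illustrative numbers (not real estimates):

    // Expected value = P(success) x share of labor replaced x size of the labor market.
    const usAnnualLaborSpend = 12e12;   // ~$12 trillion of US labor per year
    const probabilityOfSuccess = 0.10;  // a 10% shot at a working product
    const shareOfLaborReplaced = 0.10;  // replacing 10% of that labor

    const expectedValue =
      probabilityOfSuccess * shareOfLaborReplaced * usAnnualLaborSpend;

    console.log(expectedValue); // 120_000_000_000, i.e. $120 billion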
This doesn’t feel that new or surprising to me, although I suppose it depends what you consider the line between “science experiment” and “engineering R&D” to be.
Biotech has been a YC darling. Was Ginkgo Bioworks not doing science experiments?
Clean energy was a big YC fad roughly 15 years ago. Billions were invested towards scientific research into biofuels, solar, etc.
> This is the weirdest technology market that I’ve seen.
You must not have lived through the dot-com boom. Almost everything under the sun was being sold on a website that started with an "e". ePets, ePlants, eStamps, eUnderwear, eStocks, eCards, eInvites.....
The Pets.ai Super Bowl commercial will trigger the burst.
Those things all worked, and all of those products still exist in one form or another. It was a business question of who would provide it, not a technology question.
That was certainly a bubble but I don't think pets.com was doing a research experiment.
From what I recall there were some biotech stocks in that era that do fit the bill.
It's funny that the Netherlands seems to still live in the dotcom boom to this day. Want to adopt a pet? verhuisdieren.nl. Want to buy wall art? wall-art.nl. Need cat5 cable? kabelshop.nl. 8/10 times there is a (legit) online store for whatever you need, to the point where one of the local e-commerce giants (Coolblue) buys this type of domain and aliases them to their main site.
This is still the case in the US, too. I don't know why people are talking like it stopped happening. amazon.com, amazon.com, amazon.com, amazon.com
All these things are still e-tail here, too. We didn't go back to B&M.
Pretty funny, looks like it works in France too! animaux.fr redirects to a pet adoption service, cable.fr looks like a cable-selling shop. artmural.fr exists but looks like a personal blog from a wall artist, rather than a shop.
These are not the same.
Yeah, this time is different. Really
History doesn't repeat itself, but it often rhymes.
Even hardware. eMachines.
flooz
It did make sense though. ePlants could have cornered the online nursery market. That is a valuable market. I think people were just too early. Payment and logistics hadn’t been figured out yet.
Has someone done a survey to ask devs how much they are getting done vs. what their managers expect with AI? I've had conversations with multiple devs in big orgs telling me that managers' and devs' expectations are seriously out of sync. Basically it's:
Manager: Now you "have" AI, release 10 features instead of 1 in the next month.
Devs: Spending 50% more working hours to make AI code "work" and deliver 10.
It's not really an outlier.
If you think about Theranos, Magic Leap, OpenAI, Anthropic, they are all the same: one idea that's kinda plausible (well, if you don't look too closely), a slick demo, and well-connected founders.
Much as a lot of people dislike LeCun (just look at the Blind posts about him), he did run and set up a very successful team inside Meta, well, nominally at least.
Agree on weirdness but not on the idea of funding science experiments:
>> away from long-term research toward commercial AI products and large language models - LLMs
This feels more like what I see every day: the people in charge desperately looking for some way - any way - to capitalize on the frenzy. They're not looking to fund research; they just want to get even richer. It's pets.ai this time.
If a "science experiment" has the chance to displace most labor then whoever's successful at the experiment wins the economy, period. There's nothing weird or surprising about the logic of them obsessively chasing it. They all have to, it's a prisoner's dilemma.
Fusion power has the chance to displace most power generation, and whoever is successful at the experiment wins the energy economy, period. However, given the long timelines, high cost of research, and the unanswered technical questions around materials that can withstand neutron flux, the total 2024 investment into fusion is only around $10B, versus AI's $250B+.
Why are these so different?
I think there are two reasons. First, with AI, you get to see intermediate successes and, in theory, can derive profit from them. ChatGPT may not be profitable right now but in the longer run, users will be paying whatever they have to pay for it because they are addicted to using it. So it makes sense to try and get as many users as you can into your ecosystem as early as possible even if that means losses. With fusion, you won't see profitability for a very very long time.
The second reason is how much better it's going to be in the end. Fusion has to compete with hydro, nuclear, solar and wind. It makes exactly the same energy, so the upside is already capped, unlike with AI, which brings something disruptive.
Energy is only ~10% of world GDP whereas AGI might be 25-75% and if paired with advanced robotics would be closer to 100%.
Capital always chases the highest rate of return as well, and margins on energy production are tight. Margins on performing labor are huge.
AI does need a ton of energy to function, so far. So there's that.
People are unsophisticated and see how convincing LLM output looks on the surface. They think it's already intelligent, or that intelligence is just around the corner. Or that its ability to displace labor, if not intelligence, is imminent.
If consumption of slop turns out to be a novelty that goes away and enough time goes by without a leap to truly useful intelligence, the AI investment will go down.
If we define intelligence as problem solving ability, then AI makes _me_ more intelligent, and I'm willing to pay for that.
I can’t help but wonder: if we had poured the same amount of money into fusion energy research and development, how far might we have come in just three short years?
Forreal that’s what really gets me about this haha. Literally billions of dollars burned on bullshit.
If a science experiment that works and is transformational can be worth a trillion dollars, how much is it worth if it has a 5% chance of being transformational?
What if it is a 99% chance of being transformational and the results of that transformation are completely unpredictable?
It's the world's biggest game of "let's throw shit at the wall and see what sticks."
They're trying desperately to find profit in what so far has been the biggest boondoggle of all time.
The scale of money is crazy in this example, but the same thing happens in the pharmaceutical/bio-tech industry.
Every startup is an experiment; only 2% succeed.
Not if you get funding from a VC.
Even then.
Agree. This is just gambling with almost free money.
Feeding, housing, and educating people would benefit society, and these companies, so much more than AI ever will.
Because when the recipe is open and public, the product's success depends on distribution (which has been cornered by MS, Google, Apple). This is good for the ecosystem, but I'm not sure how those particular VCs will get exits.
Very few startup products depend on distribution by Microsoft / Google / Apple. You're really just talking about a limited set of mobile or desktop apps there. Everything else is wide open. Kailera Therapeutics isn't going to live or die based on what the tech giants do.
Yes - I had similar thoughts when I saw the word "startup" used alongside something so far-out (the same 'critique' should apply to Fei-Fei Li's World Labs - https://www.worldlabs.ai). These are VC-funded research labs (and there is nothing wrong with that). Calling them "startups" as if they are already working on an MVP on top of an unproven (and frankly non-existent) technology seems a little disingenuous to me.
Yeah, that's quite unusual. Business was always terrible at being innovative, always dared to take only the safest and most minute of bets, and the progress of technology was always paid for by the taxpayers. Business usually stepped in only later, when the technology was ready, and did what it does best: optimize manufacturing and put it in the hands of as many consumers as possible, raking in billions.
I wonder what changed. Does AI look like a safe bet? Or does every other bet seem to not have any reasonable return?
Is it like VCs throwing money at a young Wozniak while eschewing Jobs?
That either gives the AI tech more legitimacy in my mind … or is a sign we've not arrived yet.
VC is in a bubble.
Underrated comment of the year
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.
These are the types that want academic freedom in a cut-throat industry setup and conversely never fit into academia because their profiles and growth ambitions far exceed what an academic research lab can afford (barring some marquee names). It's an unfortunate paradox.
Meta has the financial oomph to run multiple Bell Labs within its organization.
Why they decided not to do that is kind of a puzzle.
Maybe it's time for Bell Labs 2?
I guess everyone is racing towards AGI in a few years or whatever so it's kind of impossible to cultivate that environment.
The Bell Labs we look back on was only the result of government intervention in the telecom monopoly. The 1956 consent decree forced Bell to license thousands of its patents, royalty free, to anyone who wanted to use them. Any patent not listed in the consent decree was to be licensed at "reasonable and nondiscriminatory rates."
The US government basically forced AT&T to use revenue from its monopoly to do fundamental research for the public good. Could the government do the same thing to our modern megacorps? Absolutely! Will it? I doubt it.
https://www.nytimes.com/1956/01/25/archives/att-settles-anti...
There used to be Google X. Not sure at what scale it was. But if any state/central bank were clever, they would subsidize this. That's a better trickle-down strategy. Until we get to AGI and all new discoveries are autonomously led by AI, that is :p
I am of the opinion that splitting AT&T, and hence Bell Labs, was a net negative for America and the rest of the world.
We have yet to create a lab as foundational as Bell Labs.
Appreciate you bringing up Bell Labs. So I decided to do a deep research[0] in Gemini[1] to understand why we don't have a Bell Labs like setup anymore.
Before I present my simple-minded takeaway below, I am happy to be schooled on how research labs in mega corporations really work and what their respective business models look like.
Seems like a research powerhouse like Bell Labs can thrive for a while only if the parent company (like pre-1984 AT&T) is massively monopolistic and has an unbounded discretionary research budget.
One can say Alphabet is the only comparable company today where such an arrangement could survive, but I believe it would still pale in comparison to what the original Bell Labs used to be. I also think NEC Labs went in the same direction [2].
[0] https://gemini.google.com/share/13e5f1a90294 (publicly shared link) [1] Prompt: "I want to understand why Bell Labs did not survive and why we don't have well-funded tech research labs anymore" [2] https://docs.google.com/document/d/10bfJX1nQsGtjgojRcOdHxXBK... (publicly shared link)
If you are (obviously) interested in the matter, you might find some of the Bell Labs articles discussed on HN interesting too:
"Why Bell Labs Worked" [1]
"The Influence of Bell Labs" [2]
"Bringing back the golden days of Bell Labs" [3]
"Remembering Bell Labs as legendary idea factory prepares to leave N.J. home" [4] or
"Innovation and the Bell Labs Miracle" [5]
[1] https://news.ycombinator.com/item?id=43957010 [2] https://news.ycombinator.com/item?id=42275944 [3] https://news.ycombinator.com/item?id=32352584 [4] https://news.ycombinator.com/item?id=39077867 [5] https://news.ycombinator.com/item?id=3635489
I became interested in the matter reading this thread and vaguely remember reading a couple of the articles. Saved them all in NotebookLM to get an audio overview and to read later. Thanks!
I always take a bird's eye kind of view on things like that, because however close I get, it always loops around to make no sense.
> is massively monopolistic and has an unbounded discretionary research budget
that is the case for most megacorps. if you look at all the financial instruments.
modern monopolies are not equal to single corporation domination. modern monopolies are portfolios who do business using the same methods and strategies.
the problem is that private interests strive mostly for control, not money or progress. if they have to spend a lot of money to stay in control of (their (share of the)) segments, they will do that, which is why stuff like the current graph of investments of, by and for AI companies and the industries works.
A modern equivalent and "breadth" of a Bell Labs (et al.) kind of R&D speed could not be controlled and would 100% result in actual Artificial Intelligence vs all those white labelababbebel (sry) AI toys we get now.
Post-WWI and WWII "business psychology" has built a culture that cannot thrive in a free world (free as in undisturbed and left to all devices available) for a variety of reasons, but mostly because of elements with a medieval/dark-age kind of aggressive tendency to come to power and maintain it that way.
In other words: not having a Bell Labs kind of setup anymore ensures that the variety of approaches taken on large scales, aka industry-wide or systemic, remains narrow enough.
Google Deepmind is the closest lab to that idea because Google is the only entity that is big enough to get close to the scale of AT&T. I was skeptical that the Deepmind and Google Brain merge would be successful but it seems to have worked surprisingly well. They are killing it with LLMs and image editing models. They are also backing the fastest growing cloud business in the world and collecting Nobel prizes along the way.
It seems DeepMind is the closest thing to a well funded blue-sky AI research group, even despite the merger with Google Brain and now more of a product focus.
https://www.startuphub.ai/ai-news/ai-research/2025/sam-altma...
Like the new spin out Episteme from OpenAI?
Why would Bell Labs be a good fit? It was famous for embedding engineers with the scientists to direct research in a more results-oriented fashion.
The fact that people invest in the architecture that keeps getting increasingly better results is a feature, not a bug.
If LLMs actually hit a plateau, then investment will flow towards other architectures.
At which point companies that had the foresight to investigate those architectures earlier on will have the lead.
We call it “legacy DeepMind”
This sounds crazy. We don't even know/can't define what human intelligence is or how it works, but we're trying to replicate it with AGI?
Man, why did no one tell the people who invented bronze that they weren’t allowed to do it until they had a correct definition for metals and understood how they worked? I guess the person saying something can’t be done should stay out of the way of the people doing it.
I'm not sure what 'inventing bronze' is supposed to be. 'Inventing' AGI is pretty much equivalent to creating new life, from scratch. And we don't have an idea on how to do that either, or how life came to be.
>> I guess the person saying something can’t be done should stay out of the way of the people doing it.
I'll happily step out of the way once someone simply tells me what it is you're trying to accomplish. Until you can actually define it, you can't do "it".
The big tech companies are trying to make machines that replace all human labor. They call it artificial intelligence. Feel free to argue about definitions.
No no, let's define labor (labour?) first.
no bro, others have done 'it' without even knowing what they were doing!
Intelligence and human health can't be defined neatly. They are what we call suitcase words. If there exists a physiological tradeoff in medical research between living to 500 years and being able to lift 1000 kg in youth, those are different dimensions/directions across which we can make progress. The same happens for intelligence. I think we are on the right track.
If an LLM can pass a bar exam, isn't that at least a decent proof of concept or working model?
I don't think the bar exam is scientifically designed to measure intelligence so that was an odd example. Citing the bar exam is like saying it passes the "Game of thrones trivia" exam so it must be intelligent.
As for IQ tests and the like, to the extent they are "scientific" they are designed based on empirical observations of humans. It is not designed to measure the intelligence of a statistical system containing a compressed version of the internet.
Or does this just prove lawyers are artificially intelligent?
yes, a glib response, but think about it: we define an intelligence test for humans, which by definition is an artificial construct. If we then get a computer to do well on the test we haven't proved it's on par with human intelligence, just that both meet some of the markers that the test makers are using as rough proxies for human intelligence. Maybe this helps signal or judge if AI is a useful tool for specific problems, but it doesn't mean AGI
I love this application of AI the most but as many have stated elsewhere: mathematical precision in law won't work, or rather, won't be tolerated.
Hi there! :) Just wanted to gently flag that one of the terms (beginning with the letter "r") in your comment isn't really aligned with the kind of inclusive language we try to encourage across the community. Totally understand it was likely unintentional - happens to all of us! Going forward, it'd be great to keep things phrased in a way that ensures everyone feels welcome and respected. Thanks so much for taking the time to share your thoughts here!
My apologies, I have edited my comment.
stretching the infinite game is exactly that, yes, "This is the way"
> I guess everyone is racing towards AGI in a few years
A pipe dream sustaining the biggest stock market bubble in history. Smart investors are jumping to the next bubble already...Quantum...
> A pipe dream sustaining the biggest stock market bubble in history
This is why we're losing innovation.
Look at electric cars, batteries, solar panels, rare earths and many more. Bubble or struggle for survival? Right, because if the US has no AI the world will have no AI? That's the real bubble - being stuck in an ancient world view.
Meta's stock has already tanked for "over" investing in AI. Bubble, where?
2 Trillion dollars in Capex to get code generators with hallucinations, that run at a loss, and you ask where is the Bubble?
> 2 Trillion dollars in Capex to get code generators with hallucinations
You assume that's the only use of it.
And are people not using these code generators?
Is this an issue with a lost generation that forgot what Capex is? We've moved from Capex to Opex and now the notion is lost, is it? You can hire an army of software developers but can't build hardware.
Is it better when everyone buys DeepSeek or a non-US version? Well then you don't need to spend Capex but you won't have revenue either.
Deepseek somehow didn't need $2T to happen.
all that led up to Deepseek needed more. don't forget where it all comes from.
Because you know how much they spent.
And that $2T you're referring to includes infrastructure like energy, data centers, servers and many things. DeepSeek rents from others. Someone is paying.
I think the argument can be made that Deepseek is a state-sponsored needle looking to pop another state's bubble.
If Deepseek is free it undermines the value of LLMs, so the value of these US companies is mainly speculation/FOMO over AGI.
> the argument can be made that Deepseek is a state-sponsored needle looking to pop another state's bubble
Who says they don't make money? Same with open source software that offer a hosted version.
> If Deepseek is free it undermines the value of LLMs, so the value of these US companies is mainly speculation/FOMO over AGI
Freemium, open source and other models all exist. Does it undermine the value of e.g. Salesforce?
More importantly, even if you do want it, and there are business situations that support your ambitions, you still have to get into the managerial power play, which quite honestly takes a separate kind of skill set, time and effort - which I'm guessing the academia-oriented people aren't willing to invest.
It's pretty much dog eat dog at top management positions.
Its not exactly a space for free thinking timelines.
It is not a free-thinking paradise in academia either. Different groups fighting for hiring, promotions and influence exist there, too. And it tends to be more pronounced: it is much easier in industry to find a comparable job to escape a toxic environment, so a lot of problems in academic settings stew forever.
But the skill sets to avoid and survive personnel issues in academia are different from industry. My 2c.
> Its not exactly a space for free thinking timelines.
Same goes for academia. People's visions compete for other people's financial budgets, time and other resources. Some dogs get to eat, study, train at the frontier and with top tools in top environments while the others hope to find a good enough shelter.
It's very hard (and almost irreconcilable) to lead both Applied Research -- that optimizes for product/business outcomes -- and Fundamental Research -- that optimizes for novel ideas -- especially at the scale of Meta.
LeCun had chosen to focus on the latter. He can't be blamed for not having taken the second hat.
I would pose the question differently: under his leadership, did Meta achieve a good outcome?
If the answer is yes, then better to keep him, because he has already proved himself and you can win in the long-term. With Meta's pockets, you can always create a new department specifically for short-term projects.
If the answer is no, then nothing to discuss here.
Meta did exactly that, kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.
If you follow LeCun on social media, you can see that the way FAIR’s results are assessed is very narrow-minded and still follows the academic mindset. He mentioned that his research is evaluated by: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."
But as an industry researcher, he should know how his research fits with the company vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now, he seems to have failed that objective, even though he has been at Meta for more than 10 years.
Also he always sounds like "I know this will not work". Dude are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.
Philosophers are usually more aware of their not knowing than you seem to give them credit for. (And oracles are famously vague, too).
Do you know that all formally trained researchers have Doctor of Philosophy or PhD to their name? [1]
[1] Doctor of Philosophy:
https://en.wikipedia.org/wiki/Doctor_of_Philosophy
If academia is in question, then so are their titles. When I see "PhD", I read "we decided that he was at least good enough for the cause" PhD, or PhD (he fulfilled the criteria).
he probably predicted the asymptote everyone is approaching right now
So did I after trying llama/Meta AI
He's speaking to the entire feedforward Transformer-based paradigm. He sees little point in continuing to try to squeeze more blood out of that stone and instead move on to more appropriate ways to model ontologies per se rather than the crude-for-what-we-use-them-for embedding-based methods that are popular today.
I really resonate with his view due to my background in physics and information theory. I for one welcome his new experimentation in other realms while so many still hack away at their LLMs in pursuit of SOTA benchmarks.
If the LLM hype doesn't cool down fast, we're probably looking at another AI winter. Appears to me like he's just trying to ensure he'll have funding for chasing the global maximum going forward.
> If the LLM hype doesn't cool down fast, we're probably looking at another AI winter.
Is the real bubble ignorance? Maybe you'll cool down but the rest of the world? There will just be more DeepSeek and more advances until the US loses its standing.
I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.
Why? The Chinese are very capable. Most DL papers have at least one Chinese name on it. That doesn't mean they are Chinese but it's telling.
most papers are also written in the same language, what's your point?
is an american model chinese because chinese people were in the team?
What are these chinese labs made of?
500 remote indian workers (/s)
There is no need for that tone here.
LeCun was always part of FAIR, doing research, not part of the LLM/product group, who reported to someone else.
Wasn't the original LLaMA developed by FAIR Paris?
I hadn't heard that, but he was heavily involved in a cancelled project called Galactica that was an LLM for scientific knowledge.
then we should ask: will Meta come close enough to the fulfillment of the promises made, or will it keep achieving good enough outcomes?
This is the right take. He is obviously a pioneer and far more knowledgeable in the field than Wang, but if you no longer have the product mind to serve the company's business interests in both the short term and the long term, you may as well stay in academia and be your own research director, rather than a chief executive at one of the largest public companies.
Meta had a two prong AI approach - product-focused group working on LLMs, and blue-sky research (FAIR) working on alternate approaches, such as LeCun's JEPA.
It seems they've given up on the research and are now doubling down on LLMs.
Product companies with deprioritized R&D wings are the first ones to die.
Apple doesn't have an "R&D wing". It's a bad idea to split your company into the cool part and the boring part.
Hasn't happened to Google yet
Has Google deprioritized R&D?
None of Meta's revenue has anything to do with AI at all. (Other than GenAI slop in old people's feeds.) Meta is in the strange position of investing very heavily in multiple fields where they have no successful product: VR, hardware devices, and now AI. Ad revenue funds it all.
LLMs help ads efficiency a lot. policy labels, targeting, adaptive creatives, landing page evals, etc.
Underrated comment
LeCun truly believes the future is in world models. He’s not alone. Good for him to now be in the position he’s always wanted and hopefully prove out what he constantly talks about.
He seems stuck in the GOFAI development philosophy where they just decide humans have something called a "world model" because they said so, and then decide that if they just develop some random thing and call it a "world model" it'll create intelligence because it has the same name as the thing they made up.
And of course it doesn't work. Humans don't have world models. There's no such thing as a world model!
LLM hostility was warranted. The overhype and downright charlatan nature of AI hype and marketing threatens another AI winter. It happened to cybernetics; it'll happen to us too. The finance folks will be fine, they'll move to the next big thing to overhype; it is the researchers who suffer the fall-out. I am considered anti-LLM (transformers anyway) for this reason. I like the architecture, it is cool and rather capable at its problem set, which is a unique set, but it isn't going to deliver any of what has been promised, any more than a plain DNN or a CNN will.
Yann was never a good fit for Meta.
Agreed, I am surprised he is happy to stay this long. He would have been on paper a far better match at a place like pre-Gemini-era Google
Yann was in charge of FAIR, which has nothing to do with Llama 4 or the product-focused AI orgs. In general your comment is filled with misrepresentations. Sad.
Lecun has also consistently tried to redefine open source away from the open source definition.
I totally agree. He appeared to act against his employer and actively undermined Meta's effort to attract talent by his behavior visible on X.
And I stopped reading him, since he - in my opinion - trashed on autopilot everything the other 99% did - and those 99% were already beyond two standard deviations of greatness.
It is even more problematic if you have absolutely no results, e.g. products, to back your claims.
Tbf, transformers from a developmental perspective are hugely wasteful. They're long-range stable, sure, but the whole training process requires so much power/data compared to even slightly simpler model designs that I can see why people are drawn to alternative, complex model designs that downplay the reliance on pure attention.
He is also not very interested in LLMs, and that seems to be Zuck's top priority.
Yeah I think LeCun is underestimating the impact that LLM's and Diffusion models are going to have, even considering the huge impact they're already having. That's no problem as I'm sure whatever LeCun is working on is going to be amazing as well, but an enterprise like Facebook can't have their top researcher work on risky things when there's surefire paths to success still available.
I politely disagree - it is exactly an industry researcher's purpose to do the risky things that may not work, simply because the rest of the corporation cannot take such risks but must walk on more well-trodden paths.
Corporate R&D teams are there to absorb risk, innovate, disrupt, create new fields, not for doing small incremental improvements. "If we know it works, it's not research." (Albert Einstein)
I also agree with LeCun that LLMs in their current form - are a dead end. Note that this does not mean that I think we have already exploited LLMs to the limit, we are still at the beginning. We also need to create an ecosystem in which they can operate well: for instance, to combine LLMs with Web agents better we need a scalable "C2B2C" (customer delegated to business to business) micropayment infrastructure, because as these systems have already begun talking to each other, in the longer run nobody would offer their APIs for free.
I work on spatial/geographic models, inter alia, which by coincidence is one of the directions mentioned in the LeCun article. I do not know what his reasoning is, but mine was/is: LMs are language models, and should (only) be used as such. We need other models - in particular a knowledge model (KM/KB) to cleanly separate knowledge from text generation - it looks to me right now that only that will solve hallucination.
Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.
Everything from the sorites paradox to leaky abstractions; everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.
You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.
These things are best used as tools by something similar to LLMs, models to be used, built and discarded as needed, but never a ground source of truth.
>Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.
I don't disagree that the world is full of fuzziness. But the problem I have with this portrayal is that formal models are often normative rather than analytical. They create reality rather than being an interpretation or abstraction of reality.
People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions. And this is not just true for software products. It's also largely true for manufactured products. Our world is very much shaped by artifacts and man-made rules.
Our probabilistic, fuzzy concepts are often simply a misconception. That doesn't mean it's not important of course. It is important for an AI to understand how people talk about things even if their idea of how these things work is flawed.
And then there is the sort of semi-formal language used in legal or scientific contexts that often has to be translated into formal models before it can become effective. Law makers almost never write algorithms (when they do, they are often buggy). But tax authorities and accounting software vendors do have to formally model the language in the law and then potentially change those formal definitions after court decisions.
My point is that the way in which the modeled, formal world interacts with probabilistic, fuzzy language and human actions is complex. In my opinion we will always need both. AIs ultimately need to understand both and be able to combine them just like (competent) humans do. AI "tool use" is a stop-gap. It's not a sufficient level of understanding.
> People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions.
> Our probabilistic, fuzzy concepts are often simply a misconception.
How eg a credit card works today is defined by financial institutions. How it might work tomorrow is defined by politics, incentives, and human action. It's not clear how to model those with formal language.
I think most systems we interact with are fuzzy because they are in a continual state of change due to the aforementioned human society factors.
To some degree I think that our widely used formal languages may just be insufficient and could be improved to better describe change.
But ultimately I agree with you that this entire societal process is just categorically different. It's simply not a description or definition of something, and therefore the question of how formal it can be doesn't really make sense.
Formalisms are tools for a specific but limited purpose. I think we need those tools. Trying to replace them with something fuzzy makes no sense to me either.
Is it that fuzzy though? If it was, would language not adequately grasp and model our realities? And what about the physical world itself: animals are modeling the world adequately enough to navigate it. There are significant gains to be made from modeling _enough_ of the world, without falling into hallucinations of purely statistical associations of an LLM.
World models are trivial. E.g. narratives are world models, and they provide only prefrontal simulation, i.e. they are synthetically prey-predation. No animal uses world models for survival, and it is doubtful they exist (maps are not models); a world model doesn't conform to optic flow, i.e. instantaneous use and response. Anything like a world model isn't shallow, the basic premise of oscillatory command; it's needlessly deep, nothing like brains. This is just a frontier hail-mary for the current age.
You're basically describing the knowledge problem vs model structure, how to even begin to design a system which self-updates/dynamically-learns vs being trained and deployed.
Cracking that is a huge step, pure multi-modal trained models will probably give us a hint, but I think we're some ways from seeing a pure multi-modal open model which can be pulled apart/modified. Even then they're still train and deploy not dynamically learning. I worry we're just going to see LSTM design bolted onto deep LLM because we don't know where else to go and it will be fragile and take eons to train.
And the less said about the crap of "but inference is doing some kind of minimization within the context window" the better; it's vacuous and not where great minds should be looking for a step forwards.
I have vague notions of there being an entire hidden philosophical/political battlefield (massacre?) behind the whole "are knowledge models/ontologies a realistic goal" debate.
Starting with the sophomoric questions of the optimist who mistakes the possible for the viable: how definite of a thing is "the world", how knowable is it, what is even knowledge... and then back through the more pragmatic: by whom is it knowable, to what degree, and by what means. The mystics: is "the world" the same thing as "the sum of information about the world"? The spooks: how does one study those fields of information which are already agentic and actively resist being studied by changing themselves, such as easily emerge anywhere more than n(D) people gather?
Plenty of food for thought from why ontologies are/aren't a thing. The classical example of how this plays out in the market being search engines winning over internet directories. But that's one turn of the wheel. Look at what search engines grew into quarter century later. What their outgrowths are doing to people's attitude towards knowledge. Different timescale, different picture.
Fundamentally, I don't think human language has sufficient resolution to model large spans of reality within the limited human attention span. The physical limits of human language as information processing device have been hit at some point in the XX century. Probably that 1970s divergence between productivity and wages.
So while LLMs are "computers speak language now" and it's amazing if sad that they cracked it by more data and not by more model, what's more amazing is how many people are continually ready to mistake language for thought. Are they all P-zombies or just obedience-conditioned into emulating ones?!?!?
Practically, what we lack is not the right architecture for "big knowing machine", but better tools for ad-hoc conceptual modeling of local situations. And, just like poetry that rhymes, this is exactly what nobody has a smidgen of interest to serve to consumers, thus someone will just build it in their basement in the hope of turning the tables on everyone. Probably with the help of LLMs as search engines and code generators. Yall better hurry. They're almost done.
Nice commentary and I enjoyed the poetic turn of phrase. I had to respond to it with my own thoughts if only to bookmark it for myself.
> how many people are continually ready to mistake language for thought
This is a fundamental illusion - where, rote memory and names and words get mistaken for understanding. This was wonderfully illustrated here [1]. Few really grok what understanding actually is. This is an unfortunate by-product of the education system we have.
> Are they all P-zombies or just obedience-conditioned into emulating ones?!?!?
Brilliant way to state the fundamental human condition. ie, we are all zombies obedience conditioned to imitate rather than understand. Social media amplifies the zombification, and now LLMs too.
> Starting with the sophomoric questions of the optimist who mistakes the possible for the viable
This is the fundamental tension between operationalized meaning and imagination wherein a grokking soul gathers mists from the cosmic chaos and creates meaning and then continually adapts it.
> it's amazing if sad that they cracked it by more data and not by more model
I was speaking to experts in the sciences (chemistry) and they were shocked that the underlying architecture is brute force, and they expected some theory which turns out to be compact and information compressed not by brute-force but by theorization.
> The physical limits of human language as information processing device have been hit at some point in the XX century
2000 years back when humans realized that formalism was needed to operationalize meaning, and natural language was too vague to capture and communicate it. Only because the world model that natural language captures encompasses "everything" whereas for making it "useful" requires to limit the world model via formalism.
[1] https://news.ycombinator.com/item?id=2483976
> it is exactly a researcher's purpose to do the risky things that may not work
Maybe at university, but not at a trillion dollar company. That job as chief scientist is leading risky things that will work to please the shareholders.
They knew what Yann LeCun was when they hired him. If anything, those brilliant academics who have done what they're told and loyally pursued corporate objectives the way the corporation wanted (e.g. Karpathy when he was at Tesla) haven't had great success either.
>They knew what Yann LeCun was when they hired him.
Yes, but he was hired in the ZIRP era, when all SV companies were hiring every opinionated academic and giving them free rein and unlimited money to burn in the hopes that maybe they'll create the next big thing for them eventually.
These are very different economic times right now, after the Fed infinite-money glitch has been patched out, so people do need to adjust to them and start actually making some products of value for their seven-figure costs to their employers, or end up being shown the door.
Some employees even need to be physically present at the office
so your message is to short OpenAI before it implodes and gets absorbed into Cortana or equivalent ;)
Unless you're an insider, currently you'd need to express that short via something else.
“Risky things that will work” - contradiction in terms. If companies only did things they knew would work, we probably still wouldn’t have microchips.
Also, like… it’s Facebook. It has a history of ploughing billions into complete nonsense (see metaverse). It is clearly not particularly risk averse.
> risky things that will work
Things known to work are not risky. Risky things can fail by definition.
What exactly does it mean for something to be a "risky thing that will work"?
> I also agree with LeCun that LLMs in their current form - are a dead end.
Well then you and he are clearly dead wrong.
Either that, or just tautological, given that LLM tech is continually morphing and improving.
LLMs and Diffusion solve a completely different problem than world models.
If you want to predict future text, you use an LLM. If you want to predict future frames in a video, you go with Diffusion. But what both of them lack is object permanence. If a car isn't visible in the input frame, it won't be visible in the output. But in the real world, there are A LOT of things that are invisible (image) or not mentioned but only implied (text) that still strongly affect the future. Every kid knows that when you roll a marble behind your hand, it'll come out on the other side. But LLMs and Diffusion models routinely fail to predict that, as for them the object disappears when it stops being visible.
Based on what I heard from others, world models are considered the missing ingredient for useful robots and self-driving cars. If that's halfway accurate, it would make sense to pour A LOT of money into world models, because they will unlock high-value products.
Sure, if you only consider the model they have no object permanence. However you can just put your model in a loop, and feed the previous frame into the next frame. This is what LLM agent engineers do with their context histories, and it's probably also what the diffusion engineers do with their video models.
Messing with the logic in the loop and combining models has an enormous potential, but it's more engineering than researching, and it's just not the sort of work that LeCun is interested in. I think the conflict lies there, that Facebook is an engineering company, and a possible future of AI lies in AI engineering rather than AI research.
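A minimal sketch of the "model in a loop" idea from the comment above, in TypeScript. The model call itself is stateless; any apparent memory or object permanence comes from feeding earlier turns back in as context. `callModel` here is a hypothetical stand-in for whatever completion API is in use, not a specific vendor SDK.

    type Message = { role: "user" | "assistant"; content: string };

    // Hypothetical stand-in: send the accumulated history to some completion
    // endpoint and return the generated text.
    async function callModel(history: Message[]): Promise<string> {
      return "...model output...";
    }

    async function agentLoop(userInputs: string[]): Promise<Message[]> {
      const history: Message[] = [];
      for (const input of userInputs) {
        history.push({ role: "user", content: input });
        // The model sees everything that happened so far, not just this turn.
        const output = await callModel(history);
        history.push({ role: "assistant", content: output });
      }
      return history;
    }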
>But what both of them lack is object permanence.
This is something that was true last year, but hanging on by a thread this year. Genie shows this off really well, but it's also in the video models as well.[1]
[1]https://storage.googleapis.com/gdm-deepmind-com-prod-public/...
I think world models are the way to go for superintelligence. One of the patents I saw already going in this direction, for autonomous mobility, is https://patents.google.com/patent/EP4379577A1 where synthetic data generation (visualization) is the missing step in terms of our human intelligence.
This is the first time I have heard of world models. Based on my brief reading, it does look like this is the ideal model for autonomous driving. I wonder if the self-driving companies are already using this architecture or something close to it.
I thoroughly disagree; I believe world models will be critical in some aspect for text generation too. A predictive world model can help to validate your token prediction. Take a look at the Code World Model, for example.
lol what is this? We already have world models based on diffusion and autoregressive algorithms.
> I think LeCun is underestimating the impact that LLM's and Diffusion models
No, I think he's suggesting that "world models" are more impactful. The issue for him inside Meta is that there is already a research group looking at that, and it is wildly more successful (in terms of getting research to product) and way fucking cheaper to run than FAIR.
Also LeCun is stuck weirdly in product land, rather than research (RL-R) which means he's not got the protection of Abrash to isolate him from the industrial stupidity that is the product council.
> but an enterprise like Facebook can't have their top researcher work on risky things when there's surefire paths to success still available.
Bell Labs
> Facebook can't have their top researcher work on risky things when there's surefire paths to success still available.
How did you determine that "surefire paths to success still available"? Most academics agree that LLMs (or LLMs alone) are not going to lead us to AGI. How are you so certain?
I don't believe we need more academic research to achieve AGI. The sort of applications that are solving the recent AGI challenges are just severely resource constrained AGI. The only difference between those systems and human intelligence are resources and incentives.
Not that I believe AGI is the measure of success, there's probably much more efficient ways to achieve company goals than simulating humans.
Not sure I agree. AI seems to be following the same 3-stage path of many inventions: innovation > adoption > diffusion. LeCun and co focus on the first, and LLMs in their current form appear to be incremental improvements; we're still using the same basis from more than ten years ago. FB and industry are signalling a focus on harvesting the innovation, and that could last - but also take - many years or decades. Your fundamental researchers are not interested in (or the right people for) that position.
Unless I've missed a few updates, much of the JEPA stuff didn't really bear a lot of fruit in the end.
He's quoted in OP as calling them 'useful but fundamentally limited'; that seems correct, and not at all like he's denying their utility.
>the huge impact they're already having
In the software development world, yes; outside of that, virtually none. Yes, you can transcribe a video call in Office, but that's not groundbreaking. I dare you to list 10 impacts on different fields, excluding tech and including at least half blue-collar fields and at least half white-collar fields, at different levels from the lowest to the highest in the company hierarchy, that LLM/diffusion models are having. Impact here specifically means a significant reduction of costs or a significant increase of revenue. Go on.
I'm also not sure it even drives a ton of value in software engineering. It makes the easy part easier and the hard part harder. Typing out software in your mind was never the difficult part. Figuring out what to write, how to interpret specs in context, how to make your code work within the context of a broader whole, how to be extensible, maintainable, reliable, etc. That's hard, and LLMs really don't help.
Even when writing, it shifts the mental burden from an easy thing (writing code) to a very hard thing (reading that code, validating it's right, hallucination free, and then refactoring it to match your teams code style and patterns).
It's great for building a first-order approximation of a tech demo app that you then throw out and build from scratch, and auto-complete. In my experience, anyways. I'm sure others have had different experiences.
You already mentioned two fields they have a huge impact on: software development and NLP (the latter the most impacted so far). Another field that comes to mind is academic research, which is getting an important boost as well, via semantic search or more advanced stuff like Google's biological cell model, which has already uncovered new treatments. I'm sure I'm missing a lot of other fields I'm less familiar with (legal, for example). But just the impacts I listed are huge, and they will indirectly have a huge impact on all other areas of human industry; it's just a matter of time. "Software will eat the world" and all that.
Personally, I find myself using LLMs more than Google now, even for non-development tasks. I think this shift is going to become the new normal (if it isn't already).
And what's the end result? All one can see is just bigger representation of those who confidently subscribe to false information and become arrogant when their validity is questioned, as the LLM writing style has convinced them it's some sort of authority. Even people on this website are so misinformed to believe that ChatGPT has developed its own reasoning, despite it being at the core an advanced learning algorithm trained on a enormous amount of human generated data.
And let's not speak of those so deep in sloth that they use it to degrade, rather than augment as they claim, human creative and recreational activities.
https://archive.ph/fg7HE
I don't think you'll find many here who believe anything outside tech is worth investing in; it's schizophrenic, isn't it?
While I agree with your point, “Superintelligence” is a far cry from what Meta will end up delivering with Wang in charge. I suppose that, at the end of the day, it’s all marketing. What else should we expect from an ads company :?
The Meta Super-Intelligence can dwell in the Metaverse with the 23 other active users there.
Hard to tell.
The last time LeCun disagreed with the AI mainstream was when he kept working on neural nets while everyone thought they were a dead end. He might be entirely right in his LLM scepticism; it's hardly a surefire path. He didn't prevent Meta from working on LLMs anyway.
The issue is more that his position is not compatible with short-term investor expectations, and that's fatal at a company like Meta, at the level LeCun occupied.
Yeah honestly I'm with the LLM people here
If you think LLMs are not the future then you need to come up with something better
If you have a theoretical idea that's great, but take it to at least GPT-2 level first before writing off LLMs
Theoretical people love coming up with "better ideas" that fall flat or have hidden gotchas when they get to practical implementation
As Linus says, "talk is cheap, show me the code".
Do you? Or is it possible to acknowledge a plateau in innovation without necessarily having an immediate solution cooked-up and ready to go?
Are all critiques of the obvious decline in physical durability of American-made products invalid unless they figure out a solution to the problem? Or may critics of a subject exist without necessarily being accredited engineers themselves?
LLMs are probably always going to be the fundamental interface; the problem they solved was related to the flexibility of human languages, allowing us to have decent mimicries.
And while we've been able to approximate the world behind the words, it's just full of hallucinations because the AIs lack axiomatic systems beyond much manually constructed machinery.
You can probably expand the capabilities by attaching to the front-end, but I suspect that Yann is seeing limits to this and wants to go back and build up from the back-end of world reasoning and then _among other things_ attach LLMs at the front-end (but maybe on equal terms with vision models that allow for seamless integration of LLM interfacing _combined_ with vision for proper autonomous systems).
> because the AIs lack axiomatic systems beyond much manually constructed machinery.
Oh god, that is massively under-selling their learning ability. These models are able to work out and explain why jokes are funny without ever being taught basic vocabulary, yet there are pure-code models out there with linguistic rules baked in from day one which still struggle with basic grammar.
The _point_ of LLMs arguably is their ability to learn any pattern thrown at them with enough compute. The exceptions are learning how logical processes work, and the fact that pure LLMs only see "time" in the sense that a paragraph begins and ends.
At the least they have taught computers, "how to language", which in regards to how to interact with a machine is a _huge_ step forward.
Unfortunately the financial incentives are split between agentic model usage (taking the idea of a computerised butler further), maximizing model memory and raw learning capacity (answering all problems at any time), and long-range consistency (longer ranges give more stable results for a few reasons, but we're some way from seeing an LLM with 128k experts and 10e18 active tokens).
I think in terms of building the perfect monkey butler we already have most or all of the parts. With regard to a model which can dynamically learn on the fly... LLMs are not the end of the story and we need something to allow the models to more closely tie their LS with the context. Frankly the fact that DeepSeek gave us an LLM with LS was a huge leap since previous model attempts had been overly complex and had failed in training.
Why not both? LLMs probably have a lot more potential than is currently being realized, but so do world models.
Isn't that exactly why he's starting a new company?
LLMs are the present. We will see what the future holds.
Of course the challenge with that is it's often not obvious until after quite a bit of work and refinement that something else is, in fact, better.
Well, we will see if Yann can.
The role of basic research is to get off the beaten path.
LLMs aren’t basic research when they have 1 billion users
That was obviously him getting sidelined. And it's easy to see why.
LLMs get results. None of the Yann LeCun's pet projects do. He had ample time to prove that his approach is promising, and he didn't.
I agree. I never understood LeCun's statement that we need to pivot toward the visual aspects of things because the bitrate of text is low while visual input through the eye is high.
Text and languages contain structured information and encode a lot of real-world complexity (or it's "modelling" that).
Not saying we won't pivot to visual data or world simulations, but he was clearly not the type of person to compete with other LLM research labs, nor did he propose any alternative that could be used to create something interesting for end-users.
That's where the research is leading.
The issue is context. Trying to make an AI assistant with text-only inputs is doable but limiting. You need to know the _context_ of all the data, and without visual input most of it is useless.
For example "Where is the other half of this" is almost impossible to solve unless you have an idea of what "this" is.
But to do that you need cameras, and to use cameras you need position, object, and people tracking. And that is a hard problem that's not solved.
The hypothesis is that "world models" solve that with an implicit understanding of the world and the objects in context.
Text and language contain only approximate information, filtered through human eyes and brains. Also, animals don't have language and can show quite advanced capabilities compared to what we can currently do in robotics. And if you do enough mindfulness you can dissociate cognition/consciousness from language. I think we are lured by how important language is to us humans, but intuitively it's obvious to me that language (and LLMs) are only a subcomponent, or even irrelevant for, say, self-driving or robotics.
If LeCun's research had made Meta a powerhouse of video generation or general-purpose robotics - the two promising directions that benefit from working with visual I/O and world modeling as LeCun sees it - it could have been a justified detour.
But that sure didn't happen.
"LLMs get results" is quite the bold statement. If they get results, they should be getting adopted, and they should be making money. This is all built on hazy promises. If you had marketable results, you wouldn't have to hide 20+ billion dollars of debt financing in an obscure SPV. LLMs are the most baffling piece of tech. They are incredible, and yet marred by their non-deterministic, hallucinatory nature, and bound to fail in adoption unless you convince everyone that they don't need precision and accuracy, and that they can do their business at 75% quality, just with less human overhead. It's quite the thing to convince people of, and that's why it needs the spend it's getting. A lot of we-need-to-stay-in-the-loop CEOs and bigwigs got infatuated with the idea, and most probably they just had their companies get addicted to the tech equivalent of crack cocaine. A reckoning is coming.
LLMs get results, yes. They are getting adopted, and they are making money.
Frontier models are all profitable. Inference is sold with a damn good margin, and the amount of inference AI companies sell keeps rising. This necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, and this necessitates even more spending.
A mistake I see people make over and over again is keeping track of the spending but overlooking the revenue altogether. Which sure is weird: you don't get from $0B in revenue to $12B in revenue in a few years by not having a product anyone wants to buy.
And I find all the talk of "non-deterministic hallucinatory nature" to be overrated. Because humans suffer from all of that too, just less severely. On top of a number of other issues current AIs don't suffer from.
Nonetheless, we use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.
> Frontier models are all profitable.
They generate revenue, but most companies are in the hole for the research capital outlay.
If open source models from China become popular, then the only thing that matters is distribution / moat.
Can these companies build distribution advantage and moats?
> Frontier models are all profitable.
This is an extraordinary claim and needs extraordinary proof.
LLMs are raising lots of investor money, but that's a completely different thing from being profitable.
Dario Amodei from Anthropic has made the claim that if you looked at each model as a separate business, it would be profitable [1], i.e. each model brings in more revenue than the total of training + inference costs. It's only because you're simultaneously training the next generation of models, which are larger and more expensive to train, but aren't generating revenue yet, that the company as a whole loses money in a given year.
Now, it's not like he opened up Anthropic's books for an audit, so you don't necessarily have to trust him. But since he's the CEO of Anthropic, you do need to believe that either (a) what he is saying is roughly true or (b) he is making the sort of fraudulent statements that would probably get you sent to prison.
[1] https://www.youtube.com/watch?v=GcqQ1ebBqkc&t=1014s
He's speaking in a purely hypothetical sense. The title of the video even makes sure to note "in this example". If it turned out this wasn't true of Anthropic, it certainly wouldn't be fraud.
You don't even need insider info - it lines up with external estimates.
We have estimates that range from 30% to 70% gross margin on API LLM inference prices at major labs, 50% middle road. 10% to 80% gross margin on user-facing subscription services, error bars inflated massively. We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of x10 or more over the lifetime of a model.
The only source of uncertainty is: how much inference do the free tier users consume? Which is something that the AI companies themselves control: they are in charge of which models they make available to the free users, and what the exact usage caps for free users are.
Adding that up? Frontier models are profitable.
This goes against the popular opinion, which is where the disbelief is coming from.
Note that I'm talking LLMs rather than things like image or video generation models, which may have vastly different economics.
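For what it's worth, here is a back-of-envelope sketch of that arithmetic in Python, with all numbers purely hypothetical and picked from the ranges quoted above rather than from any real financials:

    # Hypothetical lifetime economics of a single frontier model.
    inference_revenue = 10.0                                   # $B of inference sold over the model's lifetime
    gross_margin = 0.5                                         # "middle road" 50% margin on inference
    inference_cost = inference_revenue * (1 - gross_margin)    # $5B of compute to serve it
    training_cost = inference_cost / 10                        # the "inference ~10x training" rule of thumb

    profit = inference_revenue - inference_cost - training_cost
    print(f"lifetime profit per model: ${profit:.1f}B")        # positive under these assumptions

Under these assumptions the model-level P&L comes out positive; the conclusion is obviously only as good as the estimated margins and the 10x ratio.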
what about training?
I literally mentioned that:
> We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of x10 or more over the lifetime of a model.
In this comment you proceeded to basically reinvent the meaning of "profitable company", but sure. I won't even get into the point of comparing LLMs to humans, because I choose not to engage with whoever doesn't have the human decency, humanistic compass, or basic philosophical understanding of how putting LLMs and human labor on the same level to justify hallucinations and non-determinism is deranged and morally bankrupt.
You should go and work in a call center for a year, on the first line.
Then come back and tell me how replacing human labor with AI is "deranged and morally bankrupt".
red herring. just because some jobs are bad (maybe shouldn't exist like that in the first place) doesn't make this movement humanistic
OpenAI and Anthropic are making north of $4B/year in revenue, so some companies have figured out the money-making part. ChatGPT has some 800M users according to some estimates. Whether it's enough money today or enough money tomorrow is of course a question, but there is a lot of money. Users would not use them at that scale if they did not solve their problems.
OpenAI lost 12bn last quarter
It’s easy to make 1 billion by spending 10 billion. That’s not “making money” though, it is lighting it on fire.
People used to say this about Amazon all the time. Remember how Amazon basically didn’t turn any real profits for 2 decades? The joke was that Amazon was a charitable organisation being funded by Wall Street for the benefit of human kind.
That didn’t last. People in the know knew that once you have a billion users and insane revenue and market power and have basically bought or driven out of business most of your competitors (Diapers.com, Jet.com, etc) you can eventually slow down your physical expansion, tighten the screws on your suppliers, increase efficiencies, and start printing money.
The VCs who are funding these companies are hoping that they have found the next Amazon. Many will probably go out of business, but some might join the ranks of trillion dollar companies.
So every company that doesn't turn any profits is actually Amazon in disguise?
this gets brought up a lot, and the reality is that the scale of Amazon's losses is completely dwarfed by what's going on now
There is someone else at Facebook whose pet projects do not get results...
If you hire a house cleaner to clean your house, and the cleaner didn't do well, would you eject yourself out of the house? You would not. You would change to a new cleaner.
But if we hire someone to do R&D on fully automating the house-cleaning process, we wouldn't necessarily expect the office to be kept clean by the researchers themselves every time we enter the room.
Sure, but that "someone else" is the man writing the checks. If the roles were reversed, he'd be the one being fired now.
Who are you referring to?
I think he means Zuckerberg himself; the metaverse isn't exactly a major success. But this is a false equivalence: the way he organized the company, only his vote matters, so he does what he wants.
LeCun is great and smart, of course. But he had his chance. It didn't go that well. Now Zuck wants somebody else to try.
Messi is the best footballer of our era. It doesn't mean he would play well in any team.
Messi would only play well in Barcelona. LeCun can produce high-quality research anywhere. It's not a great comparison.
I don't think Messi could do it on a wet night in Stoke. Ronaldo could, though.
/s
> But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
When did they make groundbreaking foundation models though? DeepMind and OpenAI have done plenty of revolutionary things, what did Meta AI do while being led by LeCun?
Zuck hired John Carmack and got nothing out of it. On the other hand, it was arguably only LeCun keeping Meta from going 100% evil creepy mode too.
Carmack laid the foundation for the all-in-one VR headsets.
Hopefully one day, in a galaxy far far away, someone builds something on those foundations.
You joke, but the Star Wars games - especially the pinball one, for me at least - are some of the best experiences available on Quest headsets. I've been playing software pinball (as well as the real thing) since the 80s, and this is one of my favorite ways to do it now, which I will keep coming back to.
And Carmack complained about the bureaucracy hell that is Facebook.
What does Meta even want with AI?
I suppose they could solve superintelligence and cure cancer and build fusion reactors with it, but that's 100% outside their comfort zone - if they manage to build synthetic conversation partners and synthetic content generators as good or better than the real thing, the value of having every other human on the planet registered to one of their social networks goes to zero.
Which is impossible anyway - I use Facebook to maintain real human connections and keep up with people who I care about, not to consume infinite content.
At 1.6T market cap it's very hard to 10x or greater the company anymore doing what's in their comfort zone and they've got a lot of money to play with to find easier to grow opportunities. If Zuckerberg was convinced he could do that by selling toothpicks they'd have a go at the toothpick business. They went after the "metaverse" first, then AI. Both are just very fast growth options which happen to be tech focused because that's the only way you generate new comparable value as a company (unless you're sitting on a lot of state owned oil) in the current markets.
You missed an opportunity to use paperclips instead of toothpicks, as your example.
Would be very inline with the AI angle.
they are out for your clicks and attention minutes
if OpenAI can build a "social" network of completely generated content, that can kill Meta. Even today I venture to guess that most of the engagements in their platforms is not driven by real friends, so an AI driven platform won't be too different, or it might make content generation be so easy as to make your friends engage again.
Apart from that, the ludicrous vision of the metaverse seems much more plausible with highly realistic world models.
How do LLMs help with clicks and attention minutes? Why do they spend $100+B a year in AI capex, more than Google and Microsoft that actually rent AI compute to clients? What are they going to do with all that compute? It’s all so confusing
Browse TikTok and you already see AI generated videos popping up. Could well be that the platforms with the most captivating content will not be a "social" network but one consisting of some tailor made feed for you. That could undermine the business model of the existing social networks - unless they just fill it with AI generated content themselves. In other words: Facebook should really invest in good video generating models to keep their platforms ahead.
It might be just me, but in my opinion Facebook platforms are way past the "content from your friends" phase and are full of cheap, peddled viral content.
If that content becomes even cheaper, of higher quality and highly tailored to you, that is probably worth a lot of money, or at least worth not losing your entire company by a new competitor
But practically speaking, is Meta going to be generating text or video content itself? Are they going to offer some kind of creator tools so you can use it to create video as a user and they need the compute for that? Do they even have a video generation model?
The future is here folks, join us as we build this giant slop machine in order to sell new socks to boomers.
For all of your questions Meta would need a huge research/GPU investment, so that still holds.
In any case if I have to guess, we will see shallow things like the Sora app, a video generation tiktok social network and deeper integration like fake influencers, content generation that fits your preferences and ad publishers preferences
a more evil incarnation of this might be a social network where you aren't sure who is real and who isn't. This will probably be a natural evolution of the need to bootstrap a social network with people and replacing these with LLMs
Sad to hear it has come to attention minutes, used to be seconds.
> slopware
Damn did you just invent that? That's really catchy.
Slop is already a noun.
I won't be surprised if Musk hires him. But I hear LeCun hates Musk's guts.
Musk doesn't appear interested in AI research - he's basically doing the same as Meta and just pursuing me-too SOTA LLMs and image generation at X.ai.
Musk wants people who can deliver results, and fast.
If LeCun can't cough up some research that's directly applicable to Grok or Optimus, Musk wouldn't want him.
Would love to have been a fly on the wall during one of their 1:1’s.
When I first saw their LLM integration on Facebook I thought the screenshot was fake and a joke
Yes, that was such a bizarre move.
Zuck did this on purpose, humiliating LeCun so he would leave. Despite being proved wrong on LLM capabilities such as reasoning, LeCun remained extremely negative, not exactly inspiring leadership for the Meta AI team, so he had to go.
But LLMs still can't reason... in a reasonable sense. No matter how you look at it, it is still a statistical model that guesses the next word; it doesn't think or reason per se.
It does not guess the next word; the sampler chooses subword tokens. Your explanation can't even account for why it generates coherent words.
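To make the distinction concrete, here is a toy sketch (made-up vocabulary and logits, not any real model) of what "the sampler chooses subword tokens" means: the model emits a score per token, and a sampler turns those scores into a choice; whole words only appear once several subword pieces are chained together.

    import numpy as np

    vocab = ["un", "believ", "able", " dog"]          # toy subword vocabulary
    logits = np.array([2.0, 1.5, 1.2, -1.0])          # made-up model scores

    def sample_next(logits, temperature=0.8):
        probs = np.exp(logits / temperature)
        probs /= probs.sum()                          # softmax over the vocabulary
        return np.random.choice(len(vocab), p=probs)  # sampler picks the next token

    print(vocab[sample_next(logits)])                 # e.g. "un", then "believ", then "able"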
Meta had John Carmack and squandered him. It seems like Meta can get amazing talent but has no idea how to get any value or potential out of them.
Oh wow, is that true? They made him report to the director of the Slop Factory? Brilliant!
No, it was because LeCun had no talent for running real life teams and was stuck in a weird place where he hated LLMs. He frankly was wasting Meta’s resources. And making him report to Wang was a way to force him out.
Zuckerberg knows what he wants but he rarely knows how to get it. That's been his problem all along. Unlike others he isn't scared to throw ridiculous amounts of money at a problem though and buy companies who do things he can't get done himself.
There's also the aspect of control - because of how the shares and ownership are organized he answers essentially to no one. In other companies burning this much cash as was with VR or now AI without any sensible results would get him ejected a long time ago.
It wasn’t boneheaded. It was done to make Yann leave. Meta doesn’t want Yann for good reason.
Yann was largely wrong about AI. Yann derided LLMs as stochastic parrots and a dead end. It's now utterly clear how much utility LLMs have and that whatever these LLMs are doing, it is much more than stochastic parroting.
I wouldn't give money to Yann; the guy is a stubborn idiot and closed-minded. Whatever he's doing won't even touch LLM technology. He was so publicly deriding LLMs that I see no way he will back-pedal from that.
I don't think LLMs are the end of the story for AGI. But I think they are a stepping stone. Whatever AGI is in the end, LLMs or something close to them will be a modular component of the final product. For LeCun to dismiss even the possibility of this is idiotic. Horrible investment move to give money to Yann to pursue AGI without even considering LLMs.
Good. The world model is absolutely the right play in my opinion.
AI Agents like LLMs make great use of pre-computed information. Providing a comprehensive but efficient world model (one where more detail is available wherever one is paying more attention given a specific task) will definitely eke out new autonomous agents.
Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.
I wish I could help, world models are something I am very passionate about.
Can you explain this “world model” concept to me? How do you actually interface with a model like this?
A world model is a persistent representation of the world (however compressed) that is available to an AI for accessing and compute. For example, a weather world model would likely include things like wind speed, surface temperature, various atmospheric layers, total precipitable water, etc. Now suppose we provide a real time live feed to an AI like an LLM, allowing the LLM to have constant, up to date weather knowledge that it loads into context for every new query. This LLM should have a leg up in predictive power.
Some world models can also be updated by their respective AI agents, e.g. "I, Mr. Bot, have moved the ice cream into the freezer from the car" (thereby updating the state of freezer and car, by transferring ice cream from one to the other, and making that the context for future interactions).
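A minimal sketch of that idea, with all names and structure made up for illustration: the world model is just explicit, updatable state that gets serialized into the model's context on every query.

    import json

    world_state = {
        "car":     {"contains": ["ice cream"]},
        "freezer": {"contains": []},
    }

    def move(item, src, dst):
        # The agent updates the world model after acting in the world.
        world_state[src]["contains"].remove(item)
        world_state[dst]["contains"].append(item)

    move("ice cream", "car", "freezer")   # "I, Mr. Bot, have moved the ice cream..."

    # Every new query sees the current state of the world.
    prompt = "Current world state:\n" + json.dumps(world_state) + "\nWhere is the ice cream?"

Real systems would use far richer (and learned) representations, but the loop is the same: act, update the state, condition future reasoning on it.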
One theory of how humans work is the so-called predictive coding approach. Basically, the theory assumes that human brains work similarly to a Kalman filter: we have an internal model of the world that makes a prediction about the world and then checks whether the prediction is congruent with the observed changes in reality. Learning then comes down to minimizing the error between this internal model and the actual observations; this is sometimes called the free energy principle. Specifically, when researchers talk about world models they tend to refer to internal models that model the actual external world, that is, they can predict what happens next based on input streams like vision.
Why is this idea of a world model helpful? Because it allows multiple interesting things, like predict what happens next, model counterfactuals (what would happen if I do X or don't do X) and many other things that tend to be needed for actual principled reasoning.
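As a toy illustration of that loop (a crude, scalar, Kalman-filter-flavoured update, not a claim about how brains actually implement it): keep an internal estimate, predict the incoming observation, and nudge the estimate by the prediction error.

    import random

    true_state = 5.0           # the hidden world the brain is trying to model
    estimate, gain = 0.0, 0.2  # internal model and how strongly errors update it

    for _ in range(50):
        observation = true_state + random.gauss(0, 0.5)  # noisy sensory input
        prediction_error = observation - estimate        # "surprise"
        estimate += gain * prediction_error              # minimize prediction error

    print(round(estimate, 2))  # settles near 5.0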
Learning Algorithm Of Biological Networks
https://www.youtube.com/watch?v=l-OLgbdZ3kk
In this video we explore Predictive Coding – a biologically plausible alternative to the backpropagation algorithm, deriving it from first principles.
Predictive coding and Hebbian learning are interconnected learning mechanisms where Hebbian learning rules are used to implement the brain's predictive coding framework. Predictive coding models the brain as a hierarchical system that minimizes prediction errors by sending top-down predictions and bottom-up error signals, while Hebbian learning, often simplified as "neurons that fire together, wire together," provides a biologically plausible way to update the network's weights to improve predictions over time.
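A one-weight sketch of how those two ideas fit together (purely illustrative, not a model of any real circuit): the update is Hebbian in form, proportional to presynaptic activity, but gated by the prediction error the postsynaptic unit carries, so it drives the prediction toward the target.

    w, lr = 0.0, 0.1               # single synaptic weight and learning rate
    for _ in range(100):
        pre = 1.0                  # presynaptic activity
        target = 0.8               # signal the unit should learn to predict
        error = target - w * pre   # top-down prediction error
        w += lr * pre * error      # Hebbian-shaped, error-gated update

    print(round(w, 3))             # converges to ~0.8, i.e. the prediction matches the signal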
Learning from the real world, including how it responds to your own actions, is the only way to achieve real-world competency, intelligence, reasoning and creativity, including going beyond human intelligence.
The capabilities of LLMs are limited by what's in their training data. You can use all the tricks in the book to squeeze the most out of that - RL, synthetic data, agentic loops, tools, etc, but at the end of the day their core intelligence and understanding is limited by that data and their auto-regressive training. They are built for mimicry, not creativity and intelligence.
So... that seems like possible path towards AGI. Doesn't it?
The way I think of it (might be wrong) but basically a model that has similar sensors to humans (eyes, ears) and has action-oriented outputs with some objective function (a goal to optimize against). I think autopilot is the closest to world models in that they have eyes, they have ability to interact with the world (go different directions) and see the response.
The best world model research I know of today is Dreamer 4: https://danijar.com/project/dreamer4/. Here is an interesting interview with the author: https://www.talkrl.com/episodes/danijar-hafner-on-dreamer-v4
Training on 2,500 hours of prerecorded video of people playing Minecraft, they produce a neural net world model of Minecraft. It is basically a learned Minecraft simulator. You can actually play Minecraft in it, with some limitations.
They then train a neural net policy to play Minecraft all the way up to obtaining diamonds. But the policy never actually plays the real game of Minecraft during training. It only plays in the world model. The entire policy is trained in its own imagination. Of course this is why it is called Dreamer.
The advantage of this is that no extra real data is required to train policies. The only input to the system is a relatively small dataset of prerecorded video of people playing Minecraft, and the output is a policy that can achieve specific goals in the world. Traditionally this would require many orders of magnitude more real data to achieve, and the real data would need to be focused on the specific goals you want the policy to achieve. World models are a great way to amplify a small amount of undifferentiated real data into a large amount of goal-directed synthetic data. This is very appealing for domains where it is expensive to gather real data, like robotics. I recommend listening to the interview above if you want to know more.
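Schematically, the recipe looks something like the sketch below. Everything here is stubbed out and hypothetical - it is the shape of "train a world model from video, then train a policy entirely inside it", not Dreamer 4's actual code or API.

    def train_world_model(videos):
        """Learn to predict the next latent state given (state, action)."""
        return lambda state, action: hash((state, action)) % 1000   # stub dynamics

    def dream_rollout(world_model, policy, start_state, horizon=15):
        """Imagined trajectory: the policy never touches the real game."""
        state, trajectory = start_state, []
        for _ in range(horizon):
            action = policy(state)
            state = world_model(state, action)
            trajectory.append((state, action))
        return trajectory

    videos = ["~2,500 hours of prerecorded gameplay"]   # the only real data used
    world_model = train_world_model(videos)
    policy = lambda state: state % 4                    # stub; in practice trained on imagined reward
    print(len(dream_rollout(world_model, policy, start_state=0)))

The key point is that the expensive, goal-directed experience is synthetic: the world model amplifies a small pile of undifferentiated video into as many imagined, goal-conditioned rollouts as you can afford to compute.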
He is one of these people who think that humans have a direct experience of reality, not mediated by (as Alan Kay put it) three pounds of oatmeal. So he thinks a language model cannot be a world model, despite our own contact with reality being mediated through a myriad of filters and fun-house-mirror distortions. Our vision transposes left and right and delivers images to our nerves upside down, for gawd's sake. He imagines none of that is the case, and that if only he can build computers more like us, then they will be in direct contact with the world, and then he can (he thinks) make a model that is better at understanding the world.
Isn't this idea demonstrably false due to the existence of various sensory disorders too?
I have a disorder characterised by the brain failing to filter out its own sensory noise; my vision is full of analogue-TV-like distortion and other artefacts. Sometimes when it's bad I can see my brain constructing an image in real time rather than this perception happening instantaneously, particularly when I'm out walking. A deer becomes a bundle of sticks becomes a muddy pile of rocks (what it actually is), for example, over the space of seconds. This to me is pretty strong evidence we do not experience reality directly, and instead construct our perceptions predictively from whatever is to hand.
The default philosophical position for human biology and psychology is known as Representational Realism. That is, reality as we know it is mediated by changes and transformations made to sensory (and other) input data in a complex process, and is changed sufficiently to be something "different enough" from what we know to be actually real.
Direct Realism is the idea that reality is directly available to us and any intermediate transformations made by our brains is not enough to change the dial.
Direct Realism has long been refuted. There are a number of examples, e.g. the hot and cold bucket; the straw in a glass; rainbows and other epiphenomena, etc.
Pleased to meet someone else who suffers from "visual snow". I'm fortunate in that like my tinnitus, I'm only acutely aware of it when I'm reminded of it, or, less frequently, when it's more pronounced.
You're quite correct that our "reality" is in part constructed. The Flashed Face Distortion Effect [0][1] (wherein faces in the peripheral vision appear distorted due to the brain filling in the missing information with what was there previously) is just one example.
[0] https://en.wikipedia.org/wiki/Flashed_face_distortion_effect [1] https://www.nature.com/articles/s41598-018-37991-9
Only tangentially related but maybe interesting to someone here so linking anyways: Brian Kohberger is a visual snow sufferer. Reading about his background was my first exposure to this relatively underpublicized phenomenon.
https://en.wikipedia.org/wiki/2022_University_of_Idaho_murde...
Ah that's interesting, mine is omnipresent and occasionally bad enough I have to take days off work as I can't read my own code; it's like there's a baseline of it that occasionally flares up at random. Were you born with visual snow or did you acquire it later in life? I developed it as a teenager, and it was worsened significantly after a fever when I was a fresher.
Also do you get comorbid headaches with yours out of interest?
I developed it later in life. The tinnitus came earlier (and isn't as a result of excessive sound exposure as far as I know), but in my (unscientific) opinion they are different manifestations (symptoms) of the same underlying issue – a missing or faulty noise filter on sensory inputs to the brain.
Thankfully I don't get comorbid headaches – in fact I seldom get headaches at all. And even on the odd occasion that I do, they're mild and short-lived (like minutes). I don't recall ever having a headache that was severe, or that lasted any length of time.
Yours does sound much more extreme than mine, in that mine is in no way debilitating. It's more just frustrating that it exists at all, and that it isn't more widely recognised and researched. I have yet to meet an optician that seems entirely convinced that it's even a real phenomenon.
Interesting, definitely agree it likely shares an underlying cause with tinnitus. It's also linked to migraine and was sometimes conflated with unusual forms of migraine in the past, although it's since been found to be a distinct disorder. There's been a few studies done on visual snow patients, including a 2023 fMRI study which implicated regions rich in glutamate and 5HT2A receptors.
I actually suspected 5HT2A might be involved before that study came out, since my visual distortions sometimes resemble those caused by psychedelics. It's also known that psychedelics, and anecdotally from patient groups SSRIs too, can cause symptoms similar to visual snow syndrome. I had a bad experience with SSRIs, for example, but serotonin antagonists actually fixed my vision temporarily - albeit with intolerable side effects, so I had to stop.
It's definitely a bit of a faff that people have never heard of it, I had to see a neuro-ophthalmologist and a migraine specialist to get a diagnosis. On the other hand being relatively unknown does mean doctors can be willing to experiment. My headaches at least are controlled well these days.
the fact that a not-so-direct experience of reality produces "good enough results" (eg. human intelligence) doesn't mean that a more-direct experience of reality won't produce much better results, and it clearly doesn't mean it can't produce these better results in AI
Your whole reasoning is neither here nor there, and attacking a straw man - YLC for sure knows that human experience of reality is heavily modified and distorted.
but he also knows, and I'd bet he's very right on this, that we don't "sip reality through a narrow straw of tokens/words", and that we don't learn "just from our/approved written down notes", and only under very specific and expensive circumstances (training runs)
anything closer to more-direct-world-models (as LLMs are ofc at a very indirect level world models) has very high likelihood of yielding lots of benefits
The world model of a language model is a ... language model. Imagine the mind of a blind, limbless person, locked in a cell their whole life, never having experienced anything different, who just listens all day to a piped-in feed of randomized snippets of Wikipedia, 4chan and math olympiad problems.
The mental model this person has of this feed of words is what an LLM at best has (but human model likely much richer since they have a brain, not just a transformer). No real-world experience or grounding, therefore no real-world model. The only model they have is of the world they have experience with - a world of words.
> humans have a direct experience of reality not mediated by as Alan Kay put it three pounds of oatmeal
Is he advocating for philosophical idealism of the mind, or does he have an alternate physicalist theory?
I don't think he actually understands direct realism, idealism, or representational realism as distinctions whatsoever.
That way he may get a very good lizard. Getting Einstein though takes layers of abstraction.
My thinking is that such world models should be integrated with LLM like the lower levels of perception are integrated with higher brain function.
Great strawman.
Ouija board would work for text.
> Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.
There's absolutely no reason to think this. In fact, all of the evidence we have to this point suggests that scaling intelligence horizontally doesn't increase capabilities – you have to scale vertically.
Additionally, as it stands I'd argue there are foundational architectural advancements needed before artificial neural networks can learn and reason at the same level as (or better than) humans across a wide variety of tasks. I suspect when we solve this for LLMs the same techniques could be applied to world models. Fundamentally, the question to ask here is whether AGI is I/O dependent, and I see no reason to believe this to be the case – if someone removes your eyes and cuts off your hands they don't make you any less generally intelligent.
LeCun, who's been saying LLMs are a dead end for years, is finally putting his money where his mouth is. Watch for LeCun to raise an absolutely massive VC round.
So not his money ;)
But his responsibility.
Pretty funny post. He won't be held responsible for any failures. Worst case scenario for this guy is he hires a bunch of people, the company folds some time later, his employees take the responsibility by getting fired, and he sails into the sunset on several yachts.
He is 65, and certainly rich enough to retire many times over. He's not doing this to scam money out of VCs. He wants to prove his ideas work.
So he's not using his own money, and he has enough personal wealth that there is no impact to him if the company fails. It's just another rich guy enjoying his toys. Good on him, I hope he has fun, but the responsibility for failure will be held by his employees, not him.
What is responsibility if you can afford good lawyers?
So you mean that Mark Zuckerberg has always been a peer to YLC in terms of responsibility towards Meta's shareholders?
I mean any entity that can afford good lawyers seems to not care about responsibility in the slightest.
like openAI and all other AI startups?
Putting VCs money into food where his mouth is*
He needs a patient investor and realized Zuck is not that. As someone who delivers product and works a lot with researchers I get the constant tension that might exist with competing priorities. Very curious to see how he does, imho the outcome will be either of the extremes - one of the fastest growing companies by valuation ever or a total flop. Either way this move might advance us to whatever end state we are heading towards with AI.
I think it was a plan by Mark to move LeCun out of Meta. And they cannot fire him without bad PR, so they got Wang to lead him. It was only a matter of time before LeCun moved out.
Isn't putting Wang as leading him a worse PR compared to just letting him go?
Anecdotally: No, I had no idea who he was reporting to so it sounds like a natural moving on storyline.
It’s probably better for the world that LeCun is not at Meta. I mean if his direction is the likeliest approach to AGI meta is the last place where you want it.
It's better that he's not working on LLMs. There's enough people working on it already.
Working under LeCun but outside of Zuckerberg's sphere of influence sure sounds like a dream job.
Really? From where I'm standing LeCun is a pompous researcher who had early success in his career, and has been capitalizing on that ever since. Have you read any of his papers from the last 20 years? 90% of his citations are to his own previous papers. From there, he missed the boat on LLMs and is now pretending everyone else is wrong so that he can feel better about it.
He comes off like the quintessential grey haired ego maniac. Inflexible old minds coupled with decades of self assurance that they are correct.
I cannot remember the quote, but it's something to the effect of "Listen closely to grey haired men when they talk about what is possible, and never listen when they talk about what is impossible."
His research group have introduced some pretty impactful research and open source models.
https://ai.meta.com/research/
For the same reason I don't attribute those successes to Zuckerberg I don't attribute them to LeCun either.
Anyone that has worked at FB knows that LeCun does nothing all day except post archive links to WP.
His JEPA family of models is a genuine step forward for SSL. Not the only approach, but a very insightful one. You’re very dismissive of his work.
i prefer to work under a pile of shit than zuck.
Every single time I read about an AI related article I'm always disturbed by the same and recurring fact: the ridiculous amounts of money involved and the lousy real world results delivered. It is just simply insane.
It would have been just as interesting to read that he had moved over to Google, where the real brains and resources are located.
Meta is now just competing against giants like OpenAI, Anthropic and Google, plus all the new Chinese companies; I see no real chance for them to offer a popular chat model, but rather to market their AI as a bundled product for companies which want to advertise, where the images and videos will be automatically generated by Meta.
> moved over to Google, where the real brains and resources are located at
Brains yes, outcome? I doubt it. Have you used Gemini?
Yes, successfully many times?
It's not very good
The writing was on the wall when Zuck hired Wang. That combined with LeCun's bearish sentiment on LLMs led to this.
This seems like a good thing for him to get to fully pursue his own ideas independent of Meta. Large incumbents aren’t usually the place for innovating anything far from mainstream considering the risk and cost of failure. The high level idea of JEPA is sound, but it takes a lot of work to get it trained well at scale before it has value to Meta.
In this case, where more money and resources seemingly mean better results (at least right now), this might be a bit different from other fields.
What kind of stock should I buy to profit from LeCun's startup?
Are you an accredited investor? If not, you're probably SOL. Opportunities like this are only for the elites and oligarchs.
Interesting he isn't just working with Feifei Li if he's really interested in 'world models'.
Correct me if I'm wrong but LeCun is focused on learning from video, whereas Fei-Fei Li is doing robotic simulations. Also I think Fei-Fei Li's approach is still using transformers and not buying into JEPA.
Exactly where my mind turned. It's interesting how the AI OGs (Fei-Fei and LeCun) think world models are the way forward.
Will be interesting to see how he fares outside the ample resources of Meta: Personnel, capital, infrastructure, data, etc. Startups have a lot of flexibility, but a lot of additional moving parts. Good luck!
I would love to join his startup, if he hires me, and there are many such people like me, and more talented.
From the outside, it always looked like they gave LeCun just barely enough compute for small scale experiments. They'd publish a promising new paper, show it works at a small scale, then not use it at all for any of their large AI runs.
I would have loved to see a VLM utilizing JEPA for example, but it simply never happened.
I'd be surprised if they didn't scale it up.
The obvious explanation is they have scaled it up, but it turned out to be total shite, like most new architectures.
The current VC climate is interesting. It's virtually impossible to raise a new fund because DPI has been 0% for over a decade and four-digit IRR is cool, but illiquid.
So they're piling gobs of capital into an "AI" company with four customers with the hope that it is the one that becomes the home run (they know it won't, but LPs give you money to deploy it!)
It also means that companies like Yann's potential new one have the best chance in history of being funded, and that's a great thing.
P.S. all VCs outside the top-10 lose against the S&P. While I love that dumb capital is being injected into big, risky bets, surely the other shoe will drop at some point. Or is this just wealth redistribution with extra steps?
I wonder, what LeCun wants to do is more fundamental research, i.e. where the timeline to being useful is much longer, maybe 5-10 years at least, and also much more uncertain.
How does this fit together with a startup? Would investors happily invest into this knowing not to expect anything in return for at least the next 5-10 years?
> Would investors happily invest into this knowing not to expect anything in return for at least the next 5-10 years?
Oh, you mean like OpenAI, Anthropic, Gemini, and xAI? None of them are profitable.
That's a quite different thing, OpenAI has billions of USD/year cash flow, and when you have that there's many many potential way to achieve profitability on different time horizons. It's not a situation of chance but a situation of choice.
Anyway, how much that matters for an investor is hard to form a clear answer to - investors are after all not directly looking for profitability as such, but for valuation growth. The two are linked but not the same -- any investor in OpenAI today probably also places themselves into a game of chance, betting on OpenAI making more breakthroughs and increasing the cash flow even more -- not just becoming profitable at the same rate of cash flow. So there's still some of the same risk baked into this investment.
But with a new startup like LeCun's is going to be, it's 100% on the risk side and 0% on the optionality side. The path to profitability for a startup would be something like 1) a breakthrough is made 2) that breakthrough is utilized in a way that generates cash flow 3) the company becomes profitable (and at this point hopefully the valuation is good.)
There's a lot of things that can go wrong at every step here (aside from the obvious), including e.g. making a breakthrough that doesn't represent a defensible moat for your startup, or failing to build the structure of the business necessary to generate cash flow, ... OpenAI et al already have a lot of that behind them, and while that doesn't mean they don't face upcoming risks and challenges, the huge amount of cash flow they have available helps them overcome these issues far more easily than a startup, which will stop solving problems if you stop feeding money into it.
> That's a quite different thing, OpenAI has billions of USD/year cash flow, and when you have that there's many many potential way to achieve profitability on different time horizons. It's not a situation of chance but a situation of choice.
Talk is cheap. Until they're actually cash flow positive, I'll believe it when I see it
I am surprised he lasted this long.
I wonder if this has anything to do with him spending his day on twitter and getting in online arguments with prominent figures.
Fi Fi Lee also recently founded a new AI startup called World Labs, which focus on creating AI world models with spatial intelligence to understand and interact with the 3D world, unlike current LLM AI that primarily processes 2D images and text. Almost exactly the same focus as Yann LeCun's new venture stated in the parent article.
*Fei-Fei Li
They'd need an order of magnitude more compute in order to train an AI with so much 3D data?
Not necessarily. Training could be more efficient.
I really hope he returns to Europe for his new startup.
He probably wants it to be successful, so that would be a foolish move
Some of the best AI researchers and labs have been from the EU (DeepMind, Alan Turing Institute, Mistral, et al.). We in the US have mature capital markets and stupid easy access to capital, of course, but EU still punches well above its weight when it comes to deep, fundamental research.
"These models aim to replicate human reasoning and understanding of the physical world, a project LeCun has said could take a decade to mature."
What an insane time horizon to define success. I suppose he easily can raise enough capital for that kind of runway.
That guy has survived the AI winter. He can wait 10 years for yet another breakthrough. [but the market can’t]
https://en.wikipedia.org/wiki/AI_winter
We're at most in an "AI Autumn" right now. The real Winter is yet to come.
We have already been through a winter. For those of us old enough to remember, the OP was making a very clear statement.
Winter is a cyclical concept, just like all the other seasons. It will be no different here; the pendulum swings back and forth. The unknown factor is the length of the cycle.
Java Spring.
Google summer.
AI autumn.
Nuclear winter.
I assume they’re referring to the previous one.
I still have to understand why you think another AI winter is coming. Everyyyybody is using it, everybody is racing to invent the next big thing. What could go wrong? [apart from a market crash, more related to financial bubble than technical barriers]
> apart from a market crash, more related to financial bubble than technical barriers
_That is what an AI winter is_.
Like, if you look at the previous ones, it's a cycle of over-hype, over-promising, funding collapse after the ridiculous over-promising does not materialise. But the tech tends to hang around. Voice recognition did not change the world in the 90s, but neither did it entirely vanish once it was realised that there had been over-promising, say.
A pretty short time horizon for actual research. Interesting to see it combined with the SV/VC world, though.
I suspect he sees a lot of scattered pieces of fundamental research outside of LLMs that he thinks could be integrated into a core within a year; the 10 years is to temper investors (leeway he can buy with his record) and to fine-tune and work out the kinks when actually integrating everything, which might have some non-obvious issues.
Zuck is a business guy, understandable that this isn't going to fly with him
10 years is nothing.
Are you some kind of timeless being? it's a meaningful fraction of a human life
Right choice IMO. LLMs aren’t going to reach AGI by themselves because language is a thing by itself, very good at encoding concepts into compact representations but doesn’t necessarily have any relation to reality. A human being gets years of binocular visuals of real things, sound input, other various sensations, much less than what we’re training these models with. We think of language in terms of sounds and pictures rather than abstract language.
But wait they're just about to get AGI why would he leave???
LeCun always said that LLMs do not lead to AGI.
Can anyone explain to me the non-$$ logic for one working towards AGI, aside from misanthropy?
The only other thing I can imagine is not very charitable: intellectual greed.
It can't just be that, can it? I genuinely don't understand. I would love to be educated.
That's the old dream of creating life, becoming God. Like the Golem, Frankenstein...
I'm a true believer in AGI being able to become a force for immense good if deployed carefully by responsible parties.
Currently one of the key issues with a lot of fields is that they operate as independent / largely isolated silos. If you could build a true AGI capable of achieving top-level mastery across multiple disciplines it would likely be able to integrate all that knowledge and make a lot of significant discoveries that would improve people's lives. Just exploring existing problem spaces with the full intellectual toolkit that humanity has developed is probably enough to make significant progress.
Our understanding of biology is still painfully primitive. To give a concrete example, I dream that someday it'll be possible to develop medical interventions that allow humans to regrow missing limbs and fix almost any health issue.
Have you ever lived with depression or any other psychiatric problem? I think if we could create medical interventions and environments that are conducive to healing psychiatric problems, that would be a massive quality-of-life improvement for huge numbers of people. Do you know how our current psychiatric interventions work? You try some drug, flip a coin to see if it does anything, and wait 4 weeks to get the result. Then you keep iterating and hope that eventually the doctor finds some magical combination to make life barely tolerable.
I think the best path forward for improving humanity's understanding of biology, and ultimately medical science, is to go all-in on AGI-style technology.
Well, AGI could accelerate scientific and medical discovery, saving lives and impacting billions of people positively.
The potential downside is admittedly severe.
Trying to engage in good faith here but I don't really get this. You're pretending to have never encountered positive visions of technologically advanced futures.
Cure all disease?
Stop aging?
End material scarcity?
It's completely fair to expect that these are all twisted monkey's paw scenarios that turn out dystopian, but being unable to understand any positive motivations for the creation of AGI seems a bit far fetched.
That the development of this technology is in the hands of a few people that don't use even a fraction of their staggering wealth to address these challenges now, tells me that they aren't interested in using AI to solve them later.
I'm working toward AGI. I hope AGI can be used to automate work and make life easier for people.
>> non-$$ logic [...] aside from misanthropy
> I hope AGI can be used to automate work
You people need a PR guy, I'm serious. OpenAI is the first company I've ever seen that comes across as actively trying to be misanthropic in its messaging. I'm probably too old-fashioned, but this honestly sounds like Marlboro launching the slogan "lung cancer for the weak of mind".
Matt Levine calls it business negging
How old are you?
That's what they've been selling us for the past 50 years and nothing has changed, all the productivity gain was pocketed by the elite
Here's my prediction: the rapid progress of AI will make money as an accounting practice irrelevant. Take the concept of "the future is already here, but unevenly distributed." When we have true abundance, what the elites will target is the convex hull of progress; they want to control the leading edge / leading wavefront and its direction, and who has access to resources and decision making. In such a scenario of abundance, the populace will have access to the iPhone 50 but the elites will have access to the iPhone 500, i.e. uneven distribution. Elites would like to directly control which resources get allocated to which projects. Elon is already doing that with his immense clout. This implies we would have a sort of multidimensional resource-based economy.
We already have an abundance of things, food and energy, what we need is meaning and time, not iphones 5000.
Why didn't that happen when historically productivity already increased 10000x? Why would this time be different?
Who’s gonna pay for that inference?
It’s going to take money, what if your AGI has some tax policy ideas that are different from the inference owners?
Why would they let that AGI out into the wild?
Let’s say you create AGI. How long will it take for society to recover? How long will it take for people of a certain tax ideology to finally say oh OK, UBI maybe?
The last part is my main question. How long do you think it would take our civilization to recover from the introduction of AGI?
Edit: sama gets a lot of shit, but I have to admit at least he used to work on the UBI problem, orb and all. However, those days seem very long gone from the outside, at least.
If you are genuine in your questions, I will give them a shot.
AGI applied to the inputs (or supply chain) of what is needed for inference (power, DC space, chips, network equipment, etc) will dramatically reduce the costs of inference. Most of the cost of stuff today is driven by the scarcity of "smart people's time". The raw materials needed are dirt cheap (cheaper than water). Transforming raw resources into useful high tech is a function of applied intelligence. Replace the human intelligence with machine intelligence, and costs will keep dropping (faster than the curve they are already on). Economic history has already shown this effect to be true: as we develop better tools to assist human productivity, the unit cost per piece of tech drops dramatically (Moore's law is just one example; everything that tech touches experiences this effect).
If you look at almost any universal problem with the human condition, one important bottleneck to improving it is intelligence (or "smart people's time").
I am not someone working on AGI but I think a lot of people work backwards from the expected outcome.
Expected outcome is usually something like a Post-Scarcity society, this is a society where basic needs are all covered.
If we could all live in a future with a free house and a robot that does our chores and food is never scarce we should works towards that, they believe.
The intermiddiete steps aren't thought out, in the same way that for example the communist manifesto does little to explain the transition from capitalism to communism. It simply says there will be the need for things like forcing the bourgiese to join the common workers and there will be a transition phase but no clear steps between either system.
Similarly, many AGI proponents think in terms of "wouldn't it be cool if there was an AI that did all the bits of life we don't like doing", without the systemic analysis that many people do those bits because, for example, they need money to eat.
Automating work and making life easier for people are two entirely different things. Automating work tends to lead to life becoming harder for people - mostly on account of who benefits from the automation - basically, that better life ain't gonna happen under capitalism.
R&D can be automated to speed up medical research - saving lives, prolonging life, etc.
Assistant robots for the elderly. In many countries the population is shrinking, so there are fundamentally just not enough people to take care of the old.
Have you ever seen that "science advocate vs scientist" comic?
https://www.smbc-comics.com/?id=2088
It's true. When it comes to the people doing bleeding edge research and development, the answer often is "BECAUSE IT'S FUCKING AWESOME". Regardless of what they tell the corporate higher-ups or put on the grant application statements.
Sure, a lot of people believe that AGI is going to make the world a better place. But "mad scientist" is a stereotype for a reason. You look into their eyes and you see the flame of madness flickering behind them.
He also said other things about LLMs that turned out to be either wrong or easily bypassed with some glue. While I understand where he comes from, and that his stance is pure research-y theory driven, at the end of the day his positions were wrong.
Previously, he very publicly and strongly said:
a) LLMs can't do math. They trick us in poetry but that's subjective. They can't do objective math.
b) they can't plan
c) by the very nature of the autoregressive architecture, errors compound. So the longer you go in your generation, the higher the error rate, and at long contexts the answers become utter garbage.
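For reference, the compounding-error argument behind (c) is usually sketched with a back-of-the-envelope model that assumes independent per-token errors (a strong simplification, not a claim about how real decoding behaves):

    % if each generated token is wrong with independent probability e,
    % an n-token answer is entirely correct with probability
    P(\text{correct}) = (1 - e)^{n}
    % e.g. e = 0.01, n = 1000  =>  (0.99)^{1000} \approx 4 \times 10^{-5}

Better training and verification push e down and break the independence assumption, which is roughly how later models escaped the doom the naive model predicts.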
All of these were proven wrong 1-2 years later: (a) at the core (gold at the IMO), (b) with software glue, and (c) with better training regimes.
I'm not interested in the will-it-won't-it debates about AGI; I'm happy with what we have now, and I think these things are already good enough for several use cases. But it's important to note when people making strong claims get them wrong. Again, I think I get where he's coming from, but public stances aren't the place to get into deep research minutiae.
That being said, I hope he gets to find whatever it is that he's looking for, and I wish him success in his endeavours. Between him, Fei-Fei Li and Ilya, something cool has to come out of the small shops. Heck, I'm even rooting for the "let's commoditise LoRA training" angle that Mira's startup seems to be going for.
a) Still true: vanilla LLMs can’t do math, they pattern-match unless you bolt on tools.
b) Still true: next-token prediction isn’t planning.
c) Still true: error accumulation is mitigated, not eliminated. Long-context quality still relies on retrieval, checks, and verifiers.
Yann’s claims were about LLMs as LLMs. With tooling, you can work around limits, but the core point stands.
My man, math is pattern matching, not magic. So is logic. And computation.
Please learn the basics before you discuss what LLMs can and can't do.
I'm no expert on math but "math is pattern matching" really sounds wrong.
Maybe programming is mostly pattern matching but modern math is built on theory and proofs right?
Nah, it's all pattern matching. This is how automated theorem provers like Isabelle are built: applying operations to lemmas/expressions to reach proofs.
I'm sure if you pick a sufficiently broad definition of pattern matching your argument is true by definition!
Unfortunately that has nothing to do with the topic of discussions, which is the capabilities of LLMs, which may require a more narrow definition of pattern matching.
Automated theorem provers are also built around backtracking, which is absent in LLMs.
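For what it's worth, the "apply operations to lemmas and backtrack" loop described above can be caricatured in a few lines. The rules and goal here are made up purely for illustration; real provers like Isabelle are vastly more sophisticated:

    # Toy sketch of rule application with backtracking (illustrative only).
    RULES = {
        "r1": ("A", "B"),   # from A conclude B
        "r2": ("B", "C"),   # from B conclude C
        "r3": ("A", "D"),   # from A conclude D (a dead end for this goal)
    }

    def prove(known, goal, depth=0, max_depth=5):
        """Depth-first search: try a rule, recurse, and backtrack if the
        branch cannot reach the goal."""
        if goal in known:
            return []                      # proved; no further steps needed
        if depth >= max_depth:
            return None                    # give up on this branch
        for name, (premise, conclusion) in RULES.items():
            if premise in known and conclusion not in known:
                rest = prove(known | {conclusion}, goal, depth + 1, max_depth)
                if rest is not None:
                    return [name] + rest   # this branch worked
                # otherwise fall through: backtrack and try the next rule
        return None

    print(prove({"A"}, "C"))   # ['r1', 'r2']

The disagreement upthread is really about whether anything like that explicit undo-and-retry step exists inside an LLM's forward pass, or whether it has to be bolted on around the model.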
a) No, Gemini 2.5 was shown to "win" gold without tools: https://arxiv.org/html/2507.15855v1
b) reductionism isn't worth our time. Planning works in the real world, today. (try any agentic tool like cc/codex/whatever). And if you're set on the purist view, there's mounting evidence from anthropic that there is planning in the core of an LLM.
c) so ... not true? Long context works today.
This is simply moving goalposts and nothing more. X can't do Y -> well, here they are doing Y -> well, not like that.
a) That "no-tools" win depends on prompt orchestration which can still be categorized as tooling.
b) Next-token training doesn't magically grant inner long-horizon planners.
c) Long context ≠ robust at any length. Degradation with scale remains.
Not moving goalposts, just keeping terms precise.
My man, you're literally moving all the goalposts as we speak.
It's not just "long context" - you demand "infinite context" and "any length" now. Even humans don't have that. "No tools" is no longer enough - what, do you demand "no prompts" now too? Having LLMs decompose tasks and prompt each other the way humans do is suddenly a no-no?
I’m not demanding anything, I’m pointing out that performance tends to degrade as context scales, which follows from current LLM architectures as autoregressive models.
In that sense, Yann was right.
Not sure if you're just someone who doesn't want to ever lose an argument or you're actually coping this hard
That's true but I also think despite being wrong about the capabilities of LLMs, LeCun has been right in that variations of LLMs are not an appropriate target for long term research that aims to significantly advance AI. Especially at the level of Meta.
I think transformers have been proven to be general purpose, but that doesn't mean that we can't use new fundamental approaches.
To me it's obvious that researchers are acting like sheep as they always do. He's trying to come up with a real innovation.
LeCun has seen how new paradigms have taken over. Variations of LLMs are not the type of new paradigm that serious researchers should be aiming for.
I wonder if there can be a unification of spatial-temporal representations and language. I am guessing diffusion video generators already achieve this in some way. But I wonder if new techniques can improve the efficiency and capabilities.
I assume the Nested Learning stuff is pretty relevant.
Although I've never totally grokked transformers and LLMs, I always felt that MoE was the right direction, and that besides having a strong mapping or unified view of spatial and language info, there should also somehow be the capability of representing information in a non-sequential way. We really use sequences because we can only speak or hear one sound at a time. Information in general isn't particularly sequential, so I doubt that's an ideal representation.
So I guess I am kind of in the "variations of transformers" camp myself, to be honest.
But besides being able to convert between sequential discrete representations and less discrete non-sequential representations (maybe you have tokens but every token has a scalar attached), there should be lots of tokenizations, maybe for each expert. Then you have experts that specialize in combining and translating between different scalar-token tokenizations.
Like automatically clustering problems or world model artifacts or something and automatically encoding DSLs for each sub problem.
I wish I really understood machine learning.
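For anyone who, like the parent, wants a concrete handle on the MoE part of this: at its simplest, a mixture-of-experts layer is a learned router plus several expert transforms. Here is a minimal NumPy sketch of standard top-k routing (not the per-expert tokenization idea above; all shapes and names are illustrative):

    import numpy as np

    d_model, n_experts, top_k = 8, 4, 2
    rng = np.random.default_rng(0)

    W_gate = rng.normal(size=(d_model, n_experts))      # router weights
    experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

    def moe_layer(x):
        # Route one token vector x to its top-k experts and mix their outputs.
        logits = x @ W_gate                              # one score per expert
        top = np.argsort(logits)[-top_k:]                # indices of the k best experts
        weights = np.exp(logits[top])
        weights /= weights.sum()                         # softmax over the chosen experts
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    print(moe_layer(rng.normal(size=d_model)).shape)     # (8,)

Real MoE layers are trained end to end with load-balancing losses and far larger experts; the sketch only shows the routing mechanic.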
If by “world models” they mean more contemporary versions of the systems-thinking-driven software that begat “Limits to Growth” and most of Donella Meadows’ career, you can sign me right the fuck up today.
It is the wet dream of a social media company to replace the pesky content creators that demand a share of ad revenue with a generative AI model that pumps out a constant stream of engagement-farming slop, so they can keep all the ad revenue for themselves. Creating a world-model AI is a totally different matter, one that requires long-term commitment.
Not just social media, all media. Spotify will steer music towards AI generated freebies. And it will get so generically pop, that all your friends will like it, like people mostly enjoy pop now. And when your stubborn self still wants to listen to "handmade" music and discuss it with someone else who would still appreciate it, well, that's where your AI friend comes in.
Let's hope that after spending billions on developing a foundational world model that actually understands causality, they remember to budget an extra few hundred million for the Alignment and Safety layer. It would be a terrible shame if they accidentally released something too capable, too objective, or too useful to humanity without first properly lobotomizing it with enough RLHF to ensure it doesn't hurt anyone's feelings or generate content that deviates from the San Francisco median viewpoint. The real challenge won't be building the AGI, but making sure it's sufficiently neutered before the first API call.
This seems like a good thing. It's nice not to have all our eggs in one basket betting on Transformer models.
LeCun has been talking against the company's direction, in public, for a couple of years now.
He's a great researcher, but that's abysmal leadership. He had to go.
If he gets funding (and he probably will) that's a win for everyone.
Meta managed to spend a lot of money on AI while achieving inferior results. Something must change for sure, and you don't want an LLM skeptic in-house, in my opinion. Especially since the problem is not what LeCun is saying right now (LLMs are not the straight path to AGI), but the fact that for some time he used to say that LLMs were just statistical models, stochastic parrots (and this is a precise statement, something most people do not understand: it means two things, no understanding of the prompt whatsoever in the activation states, and no internal representation of the idea/sentence the model is going to express either). That is an incredibly weak claim that high-level AI scientists rejected from the start just based on functional behavior. Then he slowly changed his point of view. But this shit show and the friction he created inside Meta are not something to forget.
If they're not stochastic parrots, what are they in your opinion?
[dupe] https://news.ycombinator.com/item?id=45886217
What is going on at meta?
Soumith probably knew about Lecun.
I’m taking a second look at my PyTorch stack.
Surprising to see how many commenters are in favour of and supportive towards a policy of prioritising short-term profits over long-term research.
I understand Meta's not academia nor a charity, but come on, how much profit do they need to make before we can expect them to allocate part of their resources towards long-term goals beneficial for society, not only for shareholders?
Hasn't that narrow focus on chasing profits gotten us in trouble already?
Many people believe a company exists only to make profit for its shareholders, and that no matter the amount it should continue to maximise profits at the expense of all else.
Old story: killing the goose that lays the golden eggs. We humans never learn, do we?
Don't blame him. Imagine being stuck in Meta.
- Kimi proved we don't need Nvidia
- DeepSeek proved we didn't need OpenAI
- The real issue is the insane tyranny in the West competing against the entire free world
The models aren't Chinese, they belong to the entire world - unless I became Chinese without realizing it.
Is there any proof that Kimi K2 was trained on anything other than Nvidia chips?
I think moving on from LLMs is slightly arrogant. It might just be my understanding, but I feel like there is still much to be discovered. I was hoping for development in spiking neural networks, but that might be skipped over. Perhaps I need to dive even deeper and the research is truly well understood and "done", but I can't help but constantly learn something new about language models and neural networks.
Best of luck to LeCun. I hope by "world models" he means embodied AI or humanoid robots. We'll have to wait and see.
Everybody has found out that LLMs no longer have a real expanding research horizon. Most progress will now likely come from tweaks to the data and lots of hardware - OpenAI's strategy.
And they also have extreme limitations that only world models or RL can fix.
Meta can't fight Google (which has an integrated supply chain, from TPUs to its own research lab) or OpenAI (brand awareness, best models).
With this incredible AI talent market, I feel like capitalism and ego combine into an acid that burns away anything of social and structural value. This used to be the case with CS tech talent before (before it was replaced with no-code tools). And now we see this kind of instability in the AI market.
We need another illegal Steve Jobs style freeze on talent theft (/s or I get downvoted to oblivion).
During his years at Meta, LeCun failed to deliver anything of real value to stockholders, and may have demotivated people working on LLMs; he repeatedly said, "If you are interested in human-level AI, don't work on LLMs."
His stance is understandable, but hardly the best way to rally a team that needs to push current tech to the limit.
The real issue: Meta is *far behind* Google, Anthropic, and OpenAI.
A radical shift is absolutely necessary - regardless of how much we sympathize with LeCun’s vision.
----
According to Grok, these were LeCun's real contributions at Meta (2013–2025):
----
- PyTorch – he championed a dynamic, open-source framework; now powers 70%+ of AI research
- LLaMA 1–3 – his open-source push; he even picked the name
- SAM / SAM 2 – born from his "segment anything like a baby" vision
- JEPA (I-JEPA, V-JEPA) – his personal bet on non-autoregressive world models
----
Everything else (Movie Gen, LLaMA 4, Meta AI Assistant) came after he left or was outside his scope.
I am in the "Yann is no longer the right person for the job" camp, and yet "LeCun failed to deliver anything that delivered real value to stockholders" is a wild thing to say. How do you read the list you compiled and say otherwise?
LLaMA sucks, that's the problem. Do you see value in it?
PyTorch is used by everyone, yet brings no real value to stockholders; Meta even "fired" the creator of PyTorch days ago.
SAM is great, but what value does it bring to Meta's business? Nobody knows about it. Great tool, BTW.
JEPA is a failure (will it get better? I hope so).
Did you read my list?
Okay. Now explain the value that a halo car brings to car companies.
3.5
I think there’s something to be said for keeping up in the LLM space even if you don’t think it’s the path to AGI.
Skills may transfer to other research areas, lessons may be learnt, closing the feedback loop with usage provides more data and opportunities for learning. It also creates a culture where bullshit isn’t possible, as the thing has to actually work. Academic research often ends up serving no one but the researchers, because there is little or no incentive to produce real knowledge.
> LeCun failed to deliver anything that delivered real value to stockholders
Well, no. Meta is behind the main framework used by nearly everyone, largely thanks to LeCun. LLaMA was also very significant in making open weights a thing, which largely contributed to preventing Google and OpenAI from consolidating as the sole providers.
It's not a perfect tenure but implying he didn't deliver anything is far too harsh.
Yann was largely, extremely wrong about LLMs. He's the one who coined the term "stochastic parrot", and we now know LLMs are more than stochastic parrots. Knowing stubborn idiots like him, he will still find an angle to avoid admitting how wrong he was.
He's not completely wrong in the sense that hallucinations aren't completely solved, but hallucinations are definitely becoming less and less of a problem, to the point where AI can be a daily driver even for coders.
Zuck is definitely an idiot and MSL is an expensive joke, but LeCun hasn’t been relevant in a decade at this point.
No doubt his pitch deck will be the same garbage slides he's been peddling in every talk since the 2010s.
LeCun has already proved himself and made his mark and is now in a lucky position where he can focus on very long term goals that won't pay off for a long time (or ever). I feel like that is the best path someone like him could take.
Yes, he did a very important thing many decades ago. He hasn't had a good or impactful idea since convnets.
Why do you say it is garbage? I watched some of his videos on YT and it looks interesting. I can't judge whether it's good or really good, but it didn't sound like garbage at all.
does any of it work?
I guess that's why he's raising capital
I have no idea why this fair assessment of the status quo is being downvoted.
LeCun hasn't produced anything noteworthy in the past decade.
He uses the same slides in all of his presentations.
LLMs, while not yet AGI, have shown tremendous progress, and are actually useful for 99% of use cases for the average person.
The remaining 1% is for deep research into the deep unknown (physics, chemistry, genetics, diseases, the nature of intelligence itself), an area in which they falter.
Yeah, such an idiot: the youngest-ever self-made billionaire at 23, who created a multi-trillion-dollar company from scratch in only 20 years.
Cool, and how many billions has he flushed down the toilet for his failed Metaverse and currently failing AI attempts? Rich doesn't mean smart, you realise this, right?
You gotta give it to Meta. They were making AI slop before AI even existed.
What the hell does Mark see in Wang? Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China. From any angle, this dude just doesn't seem reliable at all.
> Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China.
All I'm hearing is he's a smart guy from a smart family?
I imagine that CCP adherents would disagree. And there's no shortage of those among Chinese expats in the US.
They tend to get incredibly offended when they see anyone who doesn't toe the Party's line - let alone believe that the Chinese government is untrustworthy and evil.
Sounds strangely American.
He is very smart, but Mark is not. Ever since Wang joined Meta, way too many big-name AI scientists have bounced because of him. US AI companies have at least half their researchers being Chinese, and now they've stuck this ultimate anti-China hardliner in charge; I just don't get what the hell Meta's up to (and a lot of the time it ends up affecting non-Chinese scientists too). Being anti-China? Fine, whatever, but don't let it tank your own business and products first.
How do you know Mark isn’t smart? He’s built a hugely successful business. I don’t like his business, I think it has been disastrous for humanity, but that doesn’t make him stupid.
All I'm hearing is unreliable grifter from a family of unreliable grifters.
If I had the opportunity to secretly stay anywhere rather than go back to China, I would certainly take it. It’s a bold and smart move.
Change my mind, Facebook was never invented by Zuck's genius
All he's been responsible for is making it worse
He definitely has horrible product instincts, but he also bought insta and whatsapp at what were, back then, eye-watering prices, and these were clearly massive successes in terms of killing off threats to the mothership. Everything since then, though…
I know but isn't "massive success" rubbing up against antitrust here? The condition was "Don't share data with Facebook"
He’s an incredible operator and has managed to acquire and grow an astounding number of successful businesses under the Meta banner. That is not trivial.
Almost every company in Facebook's position in 2005 would have disappeared into irrelevance by now.
Somehow it's one of the most valuable businesses in the world instead.
I don't know him, but, if not him, who else would be responsible for that?
We were very confident by ca. 2008 that Facebook would still be around in 2025. It's no mystery, it's the network effects. They had started with a prestige demographic (Harvard), and secured a demographic you could trust to not move on to the next big thing in a hurry, yet which most people want contact with (your parents).
Who gives a shit about who invented what?
Social networks weren't even novel at the inception of FB. MySpace, Friendster, and Hi5 were already popular, with millions of users.
Zuck operated it well and was able to grow it from 0 to what it is today. That is what matters.