Regardless of how bullish you are on AI, I think it is fair to say there has been incredible over-investment. AI is never going to do what some investors have imagined it will do, based on favorable and in some cases misleading best-case demos.
I just hope the day comes soon when we start being more reasonable about this technology...
What do you think it won’t do? Just curious which scenarios you’ve run across that seem outlandish.
I don't think the current approach leads to general AI in any practical sense, and I don't think LLMs are reliable enough, nor will they be reliable enough, to integrate across systems and be handed decision-making authority. E.g., "Hey AI, book me a flight to Miami for next Wednesday." You may be able to do something like this, but it would require as many steps as doing it through the airline website, and the chance of it booking an undesirable flight is high versus just doing it yourself. I bring this up because this is always the demo. It was the demo during the voice assistant boom/craze, and it is the demo with these LLM models. The problem is that AI works 80-90 percent of the time for simple tasks and roughly 50-50 for most complex tasks. That gap will close a bit more, but it needs to be 99.99% reliable to be trusted, and anything much short of that means it is effectively untrustworthy for anything important.
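To put rough numbers on that reliability gap: if you treat each step of a multi-step task as an independent success/failure event (a simplification, and the probabilities here are illustrative, not measured from any benchmark), per-step accuracy compounds quickly, which is why "works most of the time" per step is not good enough for end-to-end autonomy:

```python
def end_to_end(p: float, n: int) -> float:
    """End-to-end success rate for n chained steps, each succeeding
    independently with probability p (illustrative model only)."""
    return p ** n

# Even 90% per-step reliability collapses over a handful of steps,
# while 99.99% barely degrades:
for p in (0.90, 0.99, 0.9999):
    print(f"p={p}: 5 steps -> {end_to_end(p, 5):.3f}, "
          f"20 steps -> {end_to_end(p, 20):.3f}")
# p=0.9:    5 steps -> 0.590, 20 steps -> 0.122
# p=0.99:   5 steps -> 0.951, 20 steps -> 0.818
# p=0.9999: 5 steps -> 1.000, 20 steps -> 0.998
```

The independence assumption is generous to the AI (real errors correlate and cascade), yet even under it a 90%-per-step agent fails most 20-step tasks.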
Many demos have been proven to be faked or cherry-picked: scenarios where the AI would succeed under those very specific prompts but fail under any deviation. Just do a search; Google, OpenAI, and many others have faked or exaggerated features and capabilities.
I can tell you investors believe, from the demos (some of which have been proven to be faked), that this leads to general AI that can do anything, completely autonomously; that what it can do for basic coding and press releases, it will be able to do for literally everything. It can't, and it won't. And what it can do, it does very expensively. Look at driverless cars: one of the first big problems we tried to solve with machine learning, and we still can't reliably trust cars to drive themselves without a lot of upfront work for each specific city. Don't get me wrong, where we are with driver assists and robotaxis is incredible, but the investment has been far greater than the return and may always be. Once investors understand that fully, once they realize the technology IS incredible but the economics will almost never work out, they are gone. And once they are gone, OpenAI and Anthropic, with their multi-billion-dollar burn rates, will quickly need to cut costs and/or find a buyer. The only buyers who can afford to run them are Google, Apple, Amazon, and Microsoft, and they too will be looking to reduce costs and exposure when the bubble bursts, so they will focus on model efficiency even at the cost of function and features.
I think the world-model approach proposed by Yann LeCun is the way forward.
But right now the world is blind, due to the incentive of wanting LLMs to work even if they spew bullshit 50% of the time.
Once AI, whether from self-driving cars or in-house robotics, becomes really prevalent, then I will know we are in the AI era.
Agreed, it's really annoying. Though this is nothing new for the human race.