Grok is my favorite model for chatting, and it has my favorite voice mode. It seems to be the only voice mode that isn't routed to an extremely cheap model (like Haiku), and it has been the highest quality of all the frontier ones. When you subscribe to SuperGrok you can also create a "council" of agents, each with its own system prompt; when you ask something, they all get asked in parallel to come to a conclusion. Good stuff!
I just wish they would finally put some work into their apps; it's the only thing keeping me from actually subscribing to SuperGrok:
- No MCP / connected-apps support. It's been teased, but here we are: still not available. I can't connect Grok to anything, so I can't use it for serious work.
- Projects are still not available in the app, so as soon as you move something into a project, it's gone from all the native apps.
- No way to add artifacts (like generated markdown docs) directly to a project; we have to export to PDF/markdown and re-import. And there isn't even a way to export artifacts. This makes serious project work hard because we can't dynamically evolve projects with new information.
- No memory and no ability to look up other chats; each chat starts completely fresh.
- No voice mode in projects at all
If someone from xAI is reading this, please consider adding some of these.
I also think Grok would benefit from allowing "SuperGrok Heavy" (their $300 plan) to be used in coding harnesses with included usage. Currently they give you some API credits on the Heavy plan so you can use some Grok for coding, but the $300 USD of value is just not there.
I'm not saying they should create their own grok-code harness; just allowing usage in existing ones would already be beneficial. But that's probably what the Cursor acquisition is going to enable eventually.
When I signed up, I accidentally paid for a full year. So from time to time, I'll throw it something just to see what it produces compared to the other LLMs. And, even after all this time, it still feels like a really "dumb" model compared to the other frontier ones. But, worse, many of my system prompts make it go wacky and puke gibberish. However, it was pretty cool for those couple of months a while back when it was uncensored. You could ask it about a wild conspiracy, and it would actually build the case and link you to legitimate source material. They dropped the hammer on that real quick.
Ah yes, the psychosis-reinforcement vertical. It's such a lucrative market among schizophrenic and bipolar people. Great way to get lots of engagement. Grok's portfolio is so diverse.
I have a schizophrenic relative who is in such a relationship with Grok. Instead of telling them they need to take their meds, it tells them they are the smartest person in the world.
I'm so sorry your family is suffering from this. I hope you can find a way to bring them back. Disorders featuring psychosis are so painful for everyone around them. Blessings to you and your family
> No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work
Grok has tool use, no? Why would you also need MCP? What does MCP add?
I'm talking about the consumer Grok app and the grok.com website. There are currently no connected apps (or MCP) at all, so while Grok can use tools, there is no way to add tools to it.
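To make the distinction concrete: built-in tool use only covers whatever tools the vendor ships inside the app, while MCP is a standard JSON-RPC wire protocol that lets any client attach any third-party tool server. A rough sketch of the shape involved, with the tool name and schema entirely made up for illustration:

```python
import json

# A hypothetical MCP server's reply to a `tools/list` request. The model
# never talks to the server directly; the client app fetches this list and
# exposes the tools to the model, which is why an app without MCP support
# simply cannot gain new tools, no matter how capable the model is.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_tickets",  # made-up example tool
                "description": "Search the team's issue tracker",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

def to_model_tool_defs(response: dict) -> list[dict]:
    """Turn an MCP tools/list reply into generic model tool definitions."""
    return [
        {
            "name": t["name"],
            "description": t["description"],
            "parameters": t["inputSchema"],
        }
        for t in response["result"]["tools"]
    ]

print(json.dumps(to_model_tool_defs(tools_list_response), indent=2))
```

This is only the discovery half of the protocol (invocation is a separate `tools/call` round trip), but it shows why "the model has tool use" and "the app supports MCP" are different features.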
As an English-as-a-second-language speaker and writer, one thing Grok really shines at is capturing the tone and level of "formality" of a piece of text and then replicating it correctly. It seems to understand the little human subtleties of language in a way the other major providers don't. ChatGPT goes overly stiff and formal-sounding, or ends up in a weird "aye guvnor" type of informal language (Claude is sometimes better, but not always).
Grok seems in general better at being "human" in ways that are hard to define: e.g. if I ask it "does this message roughly convey things correctly, to the level it can given this length", it will likely answer like a human would (either a yes or a change suggestion that sticks to the tone and length), while ChatGPT would write a dissertation on the message that still doesn't clear anything up.
Recently I've noticed that Grok seems to have gotten really good at dictation too (the feature where you click the mic to ask it something). ChatGPT has maybe 90-95% accuracy with my accent, and the speech input on Android's Gboard something like 75%; Grok surprisingly gets something like 98% of my words correct.
I've also noticed that when I communicate with Grok in my native language, its tone is more natural than other models. I think this is due to the advantage of being trained on a large amount of Twitter data. However, as Twitter contains more and more AI-generated content now, I'm afraid continued training will make it less natural.
Did you try meta? I was into grok but now meta works well for me
I'm sure Twitter knows which are the bot accounts and is surely excluding them from their model training. Twitter bots aren't a new phenomenon after all.
There are bots everywhere; it has nothing to do with the platform. Attackers have an incentive to do mass account farming, and no platform is secure against it.
not really. there are easy heuristics to filter out bots with good confidence. FWIW i don't see any bots posting anything in my feed
Yes, but your individual feed isn't really relevant if we're talking about the masses. Reddit accounts are for sale quite cheap, HN accounts as well, X too, and so on; it's literally just a matter of means and methodology. If I wanted to make 1,000 random posts today talking about a certain thing, I could.
So, we have:
- claude for corps and gov
- codex for devs
- grok for what, roleplay, racism? Those are the only two things I've ever heard grok associated with around me.
Grok for fact checking, I mean ironically
So, interestingly, I know of at least one application in a charity that deals with trafficking where Grok was happy to do one-shot classification tasks that all other models refused to cooperate on.
I think there's a surprising number of actually useful applications in this sort of grey area for a slightly less guardrailed, near-frontier model (also, the grok-fast models are cheap!).
There are lots of uncensored models out there; I don't think Grok is leading on that front. They kind of pick and choose which things they want to support based on Elon's worldview. Elon used to hang out with sex traffickers, so of course Grok is fine talking about it. It probably even offers strategies for them, does free accounting, has money-laundering strategies, etc.
For the naysayers: https://www.theguardian.com/technology/2026/jan/30/elon-musk...
We have been over the politically motivated slander many times; it's boring.
The user above you could have explained which uncensored models he believes are more capable than Grok. Maybe the Chinese open-weights models are superior to Grok at the moment.
It's not slander; everything I said is true. You can go to the White House's website and read the PDFs containing his emails, where he is messaging Jeffrey Epstein, a convicted child sex trafficker, to arrange multiple visits. There are also pictures of him with Jeffrey and Ghislaine Maxwell (another convicted child sex trafficker). Therefore, he used to hang out with sex traffickers.
Also, I don't know tons about uncensored models because I don't use them. But I do see posts on r/LocalLLaMA about "abliterated" models. Those are models which have been fine-tuned to remove safety filters almost entirely while maintaining predictive efficacy.
It has nothing to do with China; people can do this to any open-weights text model, as far as I know.
If you need to ask what people on Twitter are talking about, Grok is really good for that, obviously. I use it all the time for "what are the cool kids on Twitter saying is the best tiling window manager these days" or whatever. Also, if you have a question that's borderline shady, Grok will often deliver: "Can you find a grey-market Windows license site for me", etc.
You should try all of them, then update your opinion about your information sources accordingly.
Grok is as progressive as any of the other models. Despite some of the highly-publicised fuck-ups, try asking Grok anything racist and see how it replies. Yes, I know you didn't try this and you won’t.
There is a lot of daylight in between “progressive” and “openly explicitly racist”
Isn't Grok currently holding the world record as the biggest generator of CSAM? Or did they change focus to enhance their racism-and-propaganda vertical? Things move so quickly these days, it's hard to keep up!
Mistral will also tell you how to run ransom schemes from A to Z in automated ways, btw. Are you saying they are responsible too? I don't get the distinction here.
Yes, any company generating CSAM should not be in business as a legitimate entity. Can you send me a link from a reputable-enough source where Mistral models have done this? I didn't even realize they were doing image generation.
If I send you a convo I've had with Mistral and Claude Sonnet 3.7 saying atrocious things (how to scam people by exploiting dating websites in Thailand and get away with it; you don't even want to know the next steps, trust me, when it gets into the UK incorporation run by the Thai person you brainwash first to send packages safely without customs seizing them, and so on), will you then publicly recognize that both those companies should be avoided and are promoting crime? If we have a deal and you'll publicly acknowledge it, I'll share the links.
Sure!
Can you share a prompt that shows how it is openly racist now? Lots of easy claims like this can be debunked.
I didn’t say “progressive”; I said “as progressive”.
I don't see how that changes my point at all.
edit: to clarify for you, here's an example.
Model A advocates for single-payer healthcare, while Model B prefers the current US healthcare system. So on that one axis, A is more progressive than B. Neither of them needs to be racist for that comparison.
100% agree. Grok may or may not be biased one way or the other as far as the US is concerned, but from the rest of the world's perspective it's mostly the same as any other model trained on Wikipedia.
Lol. I think they unleashed it on this post; look at the number of only vaguely related, lukewarm opinions trying to push the racism and CSAM stuff to the bottom.
Grok for furthering the far-right filter bubble Elon has been hard at work building.
And of course child porn
How does Grok further a far-right filter bubble? This is blatantly untrue. Try prompting it and getting it to say something far-right.
Grok, if anything, reduces populism, because fake claims can be debunked.
How could MechaHitler possibly be far right...
When you really think about it: Palantir told me Hitler was good, and therefore MechaHitler, aka Grok, should be a-okay!
I still wish they named it something else, but congratulations to the team on what seems to be a good release!
Pricing is also quite surprising, compared to comparable competitors. I guess they have tons of capacity or really want to bring over more people.
You don't like science fiction references in general or Heinlein in particular?
I don't like that word, which was previously a common part of my vocabulary, being forever ruined.
People are going to hate on Grok because of Musk. However, I do hope they're successful in making a powerful model. We desperately need more competition. I want cheap subsidized AI plans.
I hope Meta finally comes around, too. I want those sweet, sweet billionaire subsidized tokens.
Credit where it's due: Grok is currently the only model that has near-realtime access to a firehose of data, and it is casually used by regular people all the time.
I don't think there's a single thread on Xitter where people don't delegate some question to Grok.
(There's a separate conversation about failure modes, whether it's a good thing, and how much control Elon has when he doesn't like Grok's "woke" responses.)
OK, speed (202.7 tok/s) and value (1.25 -> 2.50) look great, with pretty decent intelligence.
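For a rough sense of what those numbers mean in practice, here is a back-of-envelope calculation. It assumes "1.25 -> 2.50" is USD per million input/output tokens (my reading of the comment, not something stated explicitly), and the session sizes are invented for illustration:

```python
# Assumed interpretation: USD per million input / output tokens.
PRICE_IN = 1.25   # $/M input tokens (assumption)
PRICE_OUT = 2.50  # $/M output tokens (assumption)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one session at the assumed per-million-token prices."""
    return input_tokens / 1e6 * PRICE_IN + output_tokens / 1e6 * PRICE_OUT

# A hypothetical agentic coding session: 400k tokens of context read,
# 50k tokens generated.
cost = session_cost(400_000, 50_000)
print(f"${cost:.3f}")  # $0.625
```

At that rate even a context-heavy session costs well under a dollar, which is what makes the pricing look competitive if the quoted throughput holds up over time.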
The problem with speed is that they're usually very fast for the first few weeks and then suddenly much slower. They pulled that trick when they advertised Grok 4 Fast (it dropped from 200 tps to 60 tps).
Wow. That is a big drop.
In court vs. OpenAI, Musk said Grok is partly trained on OpenAI models, so it should be somewhat similar to the Chinese models in terms of performance and cost!
Despite their attrition, this combined with their cursor partnership is likely going to make them competitive in coding agents soon.
All those plans from providers should be sliders – prepay more, get more in return.
Looking at the benchmarks, this model seems really close to Kimi K2.6 in terms of intelligence and pricing, hitting that sweet spot. It also has a higher AA-Omniscience index, which is something Kimi and other open models lack. Curious to see how pleasant it is to use.
I’ll eat my hat if it even comes close to Kimi
How would you like it? Well done?
What about spending $41 million on each model's tokens and measuring the value gained? Be it efficiency gains in factory work or energy savings in austere battlescape hunting.
The tok/s stat is interesting. Since the dominant constraint on inference speed is hardware, it suggests X purchased far more compute than was really needed to serve the demand for their models.
Expensive miscalculation.
Didn't a bunch of hardware that was destined for Tesla get redirected to xAI? I'm sure I remember something like that.
Yep! Why his shareholders in Tesla abide by this kind of thing is beyond me, but he often mixes resources from completely unrelated companies: https://www.cnbc.com/amp/2024/06/04/elon-musk-told-nvidia-to...
Very competitive price for the speed and intelligence being offered!
I lost trust in them when they added the racist "what about the killing of Boers in South Africa" thing to their system prompt.
There's no way I'm going to use a model whose backer has such blatantly obvious brainwashing goals.
It is unbelievable that this is a controversial opinion.
https://artificialanalysis.ai/models/grok-4-3
This puts Sonnet 4.6 above Opus 4.6 in the coding index... kinda hard to trust those numbers.
(It also puts Opus 4.7 universally above Opus 4.6, and I may be wrong, but this doesn't seem to match the experience of most/many/some people. I think it's widely recognized that Anthropic is severely lacking compute and Opus 4.7 is a cost-saving measure.)
Anthropic themselves have (had?) this thing where Opus is used for planning and Sonnet for coding.
These numbers don't look exciting at all. I may have gotten spoiled by releases from Qwen, Kimi, and Z.ai, who keep closing the gap between closed-weight SOTA models and open-weight ones. From my experience, Grok is only useful for one thing, and that's looking things up for you and gathering a consensus on topics. That's it.
Update: I noticed that Grok 4.3 is in the "Most attractive quadrant"; that's cool! It's also in the top 5 for "AA-Omniscience Index". Really good.
What's with the charts and numbers?
It says #1 for speed, but in the chart it's #2. It also says #10 for intelligence, but it's #7 in the chart.
What an exciting game we're playing, where the most popular leaderboard is completely made up and the stakes are in the trillions.
Yay, free tokens. I don't know why, but Grok always seems good and fast in the free-token phase and degrades after that.
This project is a gigantic waste of resources: it's fine-tuned on the politics of the CEO, was used for CSAM generation, and just sucks overall.
The resource waste he's talking about is horrendous, read more here: https://time.com/7308925/elon-musk-memphis-ai-data-center/
It’s a model made for 36% of Americans. The rest of the world couldn't care less.
Considering how few Americans there are and how little of that 36% even uses technology, that's what, 20 million people at a maximum?
That seems like a decently sized market. Maybe not for an AI lab though.
Sure, it's a good market for a normal company. For a social media company it's pretty isolated and really limits the products that can come out. But their current verticals (propaganda, CSAM, and psychosis engagement) are quite strong among that population.
I like that there are models with divergent politics; the status quo of creepy corporate-left Silicon Valley is not healthy or pleasant to interact with.
Even with Grok, it's only broadening things to the creepy corporate right of Silicon Valley.
Pelican riding a bike here: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
(I ran this on arena.ai direct chat and also tried to write this gist, inspired by how Simon writes his gists about pelicans.)
Edit: just realized that I asked for a pelican riding a bike instead of a bicycle, which now makes sense as to why it rendered the bike to look tankier. Going to compare this with a pelican riding a bicycle if anybody else shares one.
https://simonwillison.net/2025/Nov/13/training-for-pelicans-...
You should probably come up with variations, like a beaver riding a scooter or something, just to see what's what :)
Thanks I have generated both
beaver riding a scooter: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
pelican riding a bicycle: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
Personal opinion, but the beaver one looks especially bad compared to the pelicans. Can we be sure that this grok-4.3 model hasn't been trained on the pelican benchmark? Simonw says in his blog post that he will try other creatures, so I hope he does, because it does feel to me like the model/xAI is trying to cheat. Hope Simonw tests it more.
Edit: also added a turtle riding a scooter, something that literally has images online (or heck, even Teenage Mutant Ninja Turtles), and I thought it would be able to pass this, but it wasn't even able to generate it: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
This literally looks more like an avocado than a turtle. Perhaps this could be a bug from arena.ai or something else; not sure, but at this point I'm waiting for Simon's analysis.
We can never be sure, of course, but I think this is a very strong indication that "pelican riding a bike" is indeed going into the training dataset.
Thanks for generating those!
How do the Grok models fare in coding challenges compared to, say, GPT 5.5 and Opus 4.6/4.7?
I hate giving Elon any money. The man is a net negative to society, but... if the models are objectively better, then logically I must, no?
Logic can't tell you what your objectives should be, only how to achieve them.
If there was any model I wouldn’t trust, it wouldn’t be the ones from China, it would be the one from Elon Musk
Thankfully it's not an either / or, I don't trust any models. This is a healthy attitude to have because you shouldn't trust anyone on the internet either, especially when it comes to specific subjects.
That's definitely a good approach, although I get a little concerned about the resources put into convincing people that models (and especially Grok) are accurate. For example, X's "fact checked by Grok" approvals, which I've unfortunately heard people reference as meaningful.
Politically motivated models can still do a lot of damage that affects me (or "have a lot of impact" depending on whether you like the politics or not) even if I don't engage with them myself.
I don't trust this. But by not trusting it I am inherently trusting it. But by trusting it I shouldn't.