> GPT-OSS-120B, a text-only model with zero vision support, correctly identified an NVIDIA DGX Spark and a SanDisk USB drive from a desk photo.
But wasn't it Google Lens that actually identified them?
Looks like a TOS violation to me to scrape Google directly like that. While the concept of giving a text-only model 'pseudo vision' is clever, I think the solution in its current form is a bit fragile. SerpAPI, the Google Custom Search API, etc. exist for a reason; for anything beyond personal tinkering, this is a liability.
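For what it's worth, the sanctioned route isn't much more code. A minimal sketch against Google's Custom Search JSON API, assuming a Programmable Search Engine has already been set up; the environment variable names are placeholders, not anything from the article:

    import os
    import requests

    # Sketch of the sanctioned alternative: Google's Custom Search JSON API.
    # Assumes a Programmable Search Engine exists; the two env vars below are
    # hypothetical placeholders for the API key and the engine ID (cx).
    API_KEY = os.environ["GOOGLE_API_KEY"]
    CSE_ID = os.environ["GOOGLE_CSE_ID"]

    def search(query: str, num: int = 5) -> list[dict]:
        """Return title/link/snippet for the top `num` results."""
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": CSE_ID, "q": query, "num": num},
            timeout=30,
        )
        resp.raise_for_status()
        return [
            {"title": i.get("title"), "link": i.get("link"), "snippet": i.get("snippet")}
            for i in resp.json().get("items", [])
        ]

    if __name__ == "__main__":
        for hit in search("NVIDIA DGX Spark"):
            print(f"{hit['title']} - {hit['link']}")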
> Looks like a TOS violation to me to scrape Google directly like that
If something was built by violating TOSes, and you then use it to commit more TOS violations against the very party whose TOS was violated to build the thing in the first place, do they cancel each other out?
Not about GPT-OSS specifically, but say you used Gemma for the same purpose instead in this hypothetical.
Isn’t SerpAPI about scraping Google through residential proxies as a service?
And they are getting sued. https://blog.google/innovation-and-ai/technology/safety-secu...
105 comments from a few months ago.
https://news.ycombinator.com/item?id=46329109
All AI models are built on stolen data; that's fair war.
You eventually get hit with a CAPTCHA with the Playwright approach.
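For anyone who hasn't hit it yet, this is roughly what the approach and the failure mode look like. A minimal sketch using Playwright's sync API; the block-detection check is a heuristic assumption (Google tends to redirect rate-limited clients to a /sorry/ interstitial), not something from the article:

    from urllib.parse import quote_plus

    from playwright.sync_api import sync_playwright

    # Sketch of the direct-scraping approach and where it tends to break down.
    # The "blocked" check below is a heuristic assumption, not a robust detector.
    def scrape_google(query: str) -> str | None:
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto(
                f"https://www.google.com/search?q={quote_plus(query)}",
                wait_until="domcontentloaded",
            )
            html = page.content()
            # After enough requests, Google starts serving a CAPTCHA interstitial
            # instead of results, regardless of how "human" the browser looks.
            blocked = "/sorry/" in page.url or "unusual traffic" in html
            browser.close()
            return None if blocked else html

    if __name__ == "__main__":
        html = scrape_google("NVIDIA DGX Spark")
        print("hit the CAPTCHA wall" if html is None else f"got {len(html)} bytes of HTML")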
Have you tried Llama? In my experience it has been strictly better than GPT-OSS, but it might depend on specifically how it is used.
Have you tried GPT-OSS-120b MXFP4 with reasoning effort set to high? Out of all the models I can run within 96GB, it seems to consistently give better results. What exact Llama model (+ quant, I suppose) is it that you've had better results with, and which variant did you compare it against, the 120b or the 20b?
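In case anyone wants to reproduce the comparison: a minimal sketch of driving it behind a local OpenAI-compatible endpoint (llama-server, LM Studio, etc.). gpt-oss is documented to take its reasoning effort from the system prompt ("Reasoning: high"), though some servers want it passed through chat-template kwargs instead; the base URL and model name below are placeholders:

    from openai import OpenAI

    # Sketch, assuming a local OpenAI-compatible server (llama-server, LM Studio,
    # etc.) is already serving gpt-oss-120b. Base URL, API key and model name are
    # placeholders. gpt-oss reads its reasoning effort from the system prompt;
    # some servers expect it via chat-template kwargs instead.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="gpt-oss-120b",
        messages=[
            {"role": "system", "content": "Reasoning: high"},
            {"role": "user", "content": "Summarize the trade-offs of MXFP4 quantization."},
        ],
        temperature=1.0,  # gpt-oss is typically run at temperature 1.0
    )

    print(response.choices[0].message.content)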
How are you running this? I've had issues with Opencode formulating bad messages when the model runs on llama.cpp: Jinja threw a bunch of errors and GPT-OSS couldn't make tool calls. There's an issue for this on Opencode's repo, but it seems like it's been sitting there for a couple of weeks.
> What exact Llama model (+ quant, I suppose) is it that you've had better results with
Not Llama, but Qwen3-coder-next is at the top of my list right now. Q8_K_XL. It's incredible (not just for coding).