I think there's a very important nugget here unrelated to agents: Kagi as a search engine is a higher-signal source of information than Google's PageRank-and-AdSense-funded model. Primarily because Google as it is today includes a massive amount of noise and suffers from blowback/cross-contamination as more LLM-generated content pollutes the pool of reliable information.
> We found many, many examples of benchmark tasks where the same model using Kagi Search as a backend outperformed other search engines, simply because Kagi Search either returned the relevant Wikipedia page higher, or because the other results were not polluting the model’s context window with more irrelevant data.
> This benchmark unwittingly showed us that Kagi Search is a better backend for LLM-based search than Google/Bing because we filter out the noise that confuses other models.
> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.
Hey Google, Pinterest results are probably messing with AI crawlers pretty badly. I bet it would really help the AI if that site was deranked :)
Also if this really is the case, I wonder what an AI using Marginalia for reference would be like.
> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.
It's likely they can filter the results for their own agents, but will leave other results as they are. Half the issue with normal results is their ads - that's not going away.
There are several startups providing web search solely for AI agents. I'm not sure any agent uses Google for this.
> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.
Unlikely. There are very few people willing to pay for Kagi. The HN audience is not at all representative of the overall population.
Google can have really miserable search results and people will still use it. It's not enough to be as good as google, you have to be 30% better than google and still free in order to convert users.
I use Kagi and it's one of the few services I am OK with a recurring charge from, because I trust the brand for whatever reason. Until they find a way to make it free, though, it can't replace Google.
> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.
They spent the last decade and a half encouraging the proliferation of garbage via "SEO". I don't see this reversing.
> Primarily because Google as it is today includes a massive amount of noise and suffers from blowback/cross-contamination as more LLM-generated content pollutes the pool of reliable information.
I'm not convinced about this. If the strategy is "let's return wikipedia.org as the most relevant result", that's not sophisticated at all. In fact, it would only work for a very narrow subset of queries. If I search for 'top luggage for solo travel', I don't want to see Wikipedia, and I don't know how Kagi will be any better.
They wrote "returned the relevant Wikipedia page higher", not "wikipedia.org as the most relevant result" - that's an important distinction. There are many irrelevant Wikipedia pages.
(Kagi staff here)
Generally we do particularly well on product research queries [1] compared to other categories, because most poor review sites are full of trackers and other stuff we downrank.
However, there aren't public benchmarks for us to brag about on product search, and frankly the SimpleQA digression in this post made it long enough that it was almost cut.
1. (Except hyper local search like local restaurants)
do you use pinned/deranked sites as an indicator for quality?
I don't think we share them across accounts, no, but we do use your personal kagi search config in assistant searches.
I tried a prompt that consistently gets Gemini to badly hallucinate, and it responded correctly.
Prompt: "At a recent SINAC conference (approx Sept 2025) the presenters spoke about SINAC being underresourced and in crisis, and suggested better leveraging of and coordination with NGOs. Find the minutes of the conference, and who was advocating for better NGO interaction."
The conference was actually in Oct 2024. The approx date in parens causes Gemini to create an entirely false narrative, which includes real people quoted out of context. This happens in both Gemini regular chat and Gemini Deep Research (in which the narrative gets badly out of control).
Kagi reasonably enough answers: "I cannot find the minutes of a SINAC conference from approximately September 2025, nor any specific information about presenters advocating for better NGO coordination at such an event."
Ah yes, we have some benchmarks on this sort of misguided prompt trap, so it should perform well on this.
As a Kagi subscriber, I find this to be mostly useful. I'd say I do about 50% standard Kagi searches, 50% Kagi assistant searches/conversations. This new ability to change the level of "research" performed can be genuinely useful in certain contexts. That said, I probably expect to use this new "research assistant" once or twice a month.
I'd say the most useful part for me is appending ? / !quick / !research directly from the browser search bar to a query
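For example (the query itself is just an illustration): typing "best static site generators !quick" straight into the address bar kicks off the quick assistant, and swapping in !research runs the deeper one, without visiting the assistant page first.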
I'm a little confused about what the point of these are compared to the existing features/models that kagi already has. Are they just supposed to be a one-stop shop where I don't have to choose which model to use? When should I use kagi quick/research assistant instead of, e.g. kimi?
I tried the quick assistant a bit (don't have ultimate so I can't try research), and while the writing style seems slightly different, I don't see much difference in information compared to using existing models through the general kagi assistant interface.
Quick assistant is a managed experience, so we can add features to it in a controlled way we can't for all the models we otherwise support at once.
For now Quick assistant has a "fast path" answer for simple queries. We can't support the upgrades we want to add in there on all the models because they differ in tool calling, citation reliability, context window, ability to not hallucinate, etc.
The responding model is currently Qwen3-235B from Cerebras, but we want to decouple user expectations from that so we can upgrade it to something else down the road. We like Kimi, but couldn't get a stable experience for Quick on it at launch with current providers (tool-calling unreliability).
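A minimal sketch of what such fast-path routing could look like - entirely an illustration with placeholder heuristics and stub functions, not Kagi internals:

    def looks_simple(query: str) -> bool:
        # Stand-in for a trained query classifier: short queries with no
        # multi-step phrasing take the fast path.
        return len(query.split()) <= 8 and " and " not in query.lower()

    def fast_answer(query: str) -> str:
        # Placeholder for a single grounded call to a small, cheap model.
        return f"[small model] quick answer for: {query}"

    def full_research(query: str) -> str:
        # Placeholder for a multi-step tool-calling loop on a larger model.
        return f"[agent loop] researched answer for: {query}"

    def answer(query: str) -> str:
        return fast_answer(query) if looks_simple(query) else full_research(query)

    print(answer("capital of costa rica"))                 # fast path
    print(answer("compare luggage brands and summarize"))  # slow path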
I used quick research and it was pretty cool. A couple of caveats to keep in mind:
1. It answers using only the crawled sites. You can't make it crawl a new page.
2. It doesn't use a page's search function automatically.
This is expected, but it doesn't hurt to keep in mind. I think it'd be pretty useful: you ask for recent papers on a site, the engine could use Hacker News's search function, then Kagi would crawl the page (rough sketch of the idea below).
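For what it's worth, that flow is easy to prototype against Hacker News's public Algolia search API (a rough sketch of the idea, not a Kagi feature):

    import requests

    def hn_search_top_url(query: str):
        # Use the site's own search (HN's public Algolia endpoint) to find
        # a relevant story, then return its external URL for crawling.
        r = requests.get("https://hn.algolia.com/api/v1/search",
                         params={"query": query, "tags": "story"}, timeout=10)
        r.raise_for_status()
        hits = r.json().get("hits", [])
        return hits[0].get("url") if hits else None

    url = hn_search_top_url("recent retrieval augmented generation papers")
    if url:
        page = requests.get(url, timeout=10).text  # what the assistant would then read
        print(url, len(page))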
I'm seeing a lot of investment in these things that have a short shelf life.
Agents/assistants but nothing more.
We're building tools that we find useful, and we hope others find them useful too. See notes on our view of LLMs and their flaws:
https://blog.kagi.com/llms
Why do you think the shelf life is short?
Kagi reminds me of the original search engines of yore, when I could type what I want and it would appear, and I could go on with my work/life.
As for the people who claim this will create/introduce slop: Kagi is one of the few platforms actively fighting against low-quality AI-generated content, with their community-fueled "SlopStop" campaign.[0]
Not sponsored, just a fan. Looking forward to trying this out.
[0] https://help.kagi.com/kagi/features/slopstop.html
regular reminder: kagi is - above all else - a really, really good search engine, and if google/etc., or even just the increasingly horrific ads-ocracy, makes you sad, you should definitely give it a go - the trial is here: https://kagi.com/pricing
if you like it, it's only $10/month, which I regrettably spend on coffee some days.
I know that the price hasn't changed for a while, but I would pay for unlimited search and no AI.
> above all else
What they've been building for the past couple of years makes it blindingly clear that they are definitely not a search engine *above all else*.
Don't believe me? Check their CEO's goal: https://news.ycombinator.com/item?id=45998846
The fact that people applaud Kagi taking the money they gave for search to invest it in bullshit AI products and spit on Google's AI search at the same time tells you everything you need to know about HackerNews.
Search is AI now, so I don’t get what your argument is.
Since 2019, Google and Bing have both used BERT-style encoder-only architectures in search.
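For anyone unfamiliar, encoder-only retrieval boils down to embedding the query and the documents with the same encoder and ranking by similarity - a toy sketch (model choice arbitrary; nothing like production ranking):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small encoder-only model
    docs = [
        "Kagi is a paid, ad-free search engine.",
        "PageRank scores pages by their link structure.",
        "BERT produces contextual embeddings of text.",
    ]
    doc_emb = model.encode(docs, convert_to_tensor=True)
    query_emb = model.encode("how is ad-free search funded", convert_to_tensor=True)

    scores = util.cos_sim(query_emb, doc_emb)[0]  # cosine similarity per doc
    best = int(scores.argmax())
    print(docs[best], float(scores[best]))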
I’ve been using Kagi ki (now research assistant) for months and it is a fantastic product that genuinely improves the search experience.
So overall I’m quite happy they made these investments. When you look at Google and Perplexity this is largely the direction the industry is going.
They’re building tools on other LLMs, basically running OpenRouter or something behind the scenes. They even show you your token use/cost against your allowance/budget on the billing page, so you know what you’re paying for. They’re not training their own from-scratch LLMs, which I would consider a waste of money at their size/scale.
We're not running on openrouter, that would break the privacy policy.
We get specific deals with providers and use different ones for production models.
We do train smaller scale stuff like query classification models (not trained on user queries, since I don't even have access to them!) but that's expected and trivially cheap.
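For illustration, a query classifier at its smallest can be a toy pipeline like the following (made-up labels and examples; Kagi's actual models aren't public):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny made-up training set: query text -> intent label.
    queries = [
        "capital of france", "boiling point of water",
        "best carry-on luggage 2025 reviews", "cheap espresso machine reviews",
        "compare rust and go for web servers", "survey of rag techniques",
    ]
    labels = ["factual", "factual", "product", "product", "research", "research"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(queries, labels)
    print(clf.predict(["top luggage for solo travel"]))  # likely "product"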
Do you have any evidence that the AI efforts are not being funded by the AI product, Kagi Assistant? I would expect the reverse: the high-margin AI products are likely cross-subsidizing the low-margin search products and their sliver of AI support.
High-margin AI products? Yes the world is just filled with those!
Our stuff is profitable.
Actually, if you use LLMs sized responsibly to the task, it's cheaper than a lot of APIs for the final product.
The expensive LLMs are expensive, but the cheap ones are cheaper than other infrastructure in something like quick answer or quick assistant
We're explicitly conscious of the bullshit problem in AI and we try to focus on only building tools we find useful. See position statement on the matter yesterday:
https://blog.kagi.com/llms
> LLMs are bullshitters. But that doesn't mean they're not useful
> Note: This is a personal essay by Matt Ranger, Kagi’s head of ML
I appreciate the disclaimer, but never underestimate someone's inability to understand something when their job depends on them not understanding it.
Bullshit isn't useful to me, I don't appreciate being lied to. You might find use in declaring the two different, but sufficiently advanced ignorance (or incompetence) is indistinguishable from actual malice, and thus they should be treated the same.
Your essay, while well written, doesn't do much to convince me any modern LLM has a net positive effect. If I have to duplicate all of its research to verify none of it is bullshit - which will only be harder after using it, given the anchoring and confirmation bias it will introduce - why bother?
Your words don't match your actions.
And to be clear, you shouldn't build the tools that YOU find useful; you should build the tools that your users, who pay for a specific product, find useful.
You could have LLMs that are actually 100% accurate in their answers and it would not matter at all to what I am raising here. People are NOT paying Kagi for bullshit AI tools; they're paying for search. If you think otherwise, prove it: make the subscriptions entirely separate for the two products.
Kagi founder here. We are moving to a future where these subscriptions will be separate. Even today, more than 80% of our members use Kagi Assistant and our other AI-supported products, so saying "people are NOT paying Kagi for bullshit AI tools" is not accurate - mostly in the sense that we are not in the business of creating bullshit tools. Life is too short for that. I also happen to like the Star Trek version of the future, where smart computers we can talk to exist. I also like that Star Trek is still 90% human drama and 10% technology quietly working in the background in service of humans - that is the kind of future I would like to build towards and leave for my children. Having the most accurate search in the world that has users' best interest in mind is a big part of it, and that is not going anywhere.
edit: seeing the first two (negative) replies to my comment made me smile. HN is a tough crowd to please :) The thing is, similar to how I did paid search and went all in with my own money when everyone thought I was crazy - out of my own need, and my family's need, to have search done right - I am doing the same now with AI, wanting to have it done right as a product. What you see here is the best effort of this group of humans that call themselves Kagi - not more, not less.
I found Kagi quite recently, and after blowing through my trial credits, and now almost blowing through my low tier (300) credits, I'm starting to look at the next tier up. However, it's approaching my threshold of value vs price.
I have my own payment method for AI (OpenWebUI hosted on a personal home server connected to OpenRouter API credits, which costs me about $1-10 per month depending on my usage), so seeing AI bundled with searches in Kagi's pricing really just sucks the value out of the main reason I want to switch to Kagi.
I would love to be able to just buy credits freely (say, 300 credits for $2-3) and use them whenever. No AI stuff, no subscription, just pay for my searches. If I have a lull in my searches for a month, then a) no extra resources from Kagi have been spent, and b) my credits aren't used and roll over. Similarly, if I have a heavy search month, I'll buy more credits.
I just don't want to buy extra AI on top of what I already have.
> We are moving to a future where these subscriptions will be separate. Even today, more than 80% of our members use Kagi Assistant and our other AI-supported products, so saying "people are NOT paying Kagi for bullshit AI tools" is not accurate - mostly in the sense that we are not in the business of creating bullshit tools.
For what it's worth, as someone who tends to be pretty skeptical of introducing AI tools into my life, this statistic doesn't really convince me much of their utility. I'm not sure how to differentiate it from selection bias - users who don't want AI tools just don't subscribe in the first place - rather than a signal that the AI tools are worthwhile for people outside a niche group already interested enough to pay for them.
This isn't as strong a claim as what the parent comment was saying; it's not saying that the users you have don't want to be paying for AI tools, but it doesn't mean that there aren't people who are actively avoiding paying for them either. I don't pretend to have any sort of insight into whether this is a large enough group to be worth prioritizing, but I don't think the statement of your perspective here is going to be particularly compelling to anyone who doesn't already agree with you.
> I also happen to like Star Trek version of the future, where smart computers we can talk to exist [...], this is the kind of future I would like to build towards
Well if that doesn't seal the deal in making it clear that Kagi is not about search anymore, I don't know what does. Sad day for Kagi search users, wow!
> Having the most accurate search in the world that has users' best interest in mind is a big part of it
It's not, you're just trying to convince yourself it is.
I can't really do anything with the recommendation you're making.
The recommendation you made takes your personal preference as an axiom.
The fact is that the APIs in search cost vastly more than the LLMs used in quick answer / quick assistant.
If you use the expensive AI stuff (research assistant or the big tier 1 models) that's expensive. But also: it is in a separate subscription, the $25/month one.
We used to not give any access to the Assistant at the $5 and $10 tiers; now we do. It's a free upgrade for those users.
I really wish Kagi would focus on search and not waste time and money on slop.
What they're saying in this post is that they are designing these LLM-based features to support search.
The post describes how their use-case is finding high quality sources relevant to a query and providing summaries with references/links to the user (not generating long-form "research reports")
FWIW, this aligns with what I've found ChatGPT useful for: a better Google, rather than a robotic writer.
I'm sure Google also says they built "AI mode" to "support search".
Their search is still trash.
Except the AI mode filters out the bad results for you :)
I have a no-AI mode that filters out the bad results too. The problem is that it doesn't return any results at all, because it doesn't help with the harder problem of filtering out only the bad results while keeping the good ones. So far it's not clear to me that LLMs have significantly moved the needle on the ability to differentiate the two.
If you look at my post history, I’m the last person to defend LLMs. That being said, I think LLMs are the next evolution in search. Not what OpenAI and Anthropic and xAI are working on - I think all the major models are moving further and further away from that with the “AI” stuff. But the core technology is an amazing way to search.
So I actually find it the perfect thing for Kagi to work with. If they can leverage LLMs to improve search, without getting distracted by the “AI” stuff, there’s tons of potential value.
Not saying that’s what this is… but if there’s any company I’d want playing with LLMs it’s probably Kagi
A better search would be rich metadata and powerful filter tools, not result summarizer. When I search, I want to find stuff, I don’t want an interpretation of what was found.
This is building on top of the existing core product, so the output is directly tied to the quality of their core search results being fed into the assistants. Overall I really enjoy all of their AI products, using their prompt assistant frequently for quick research tasks.
It does miss occasionally, or I feel like "that was a waste of tokens" due to a bad response or something, but overall I like supporting Kagi's current mission in the market of AI tools.
Same, though in fairness as long as they don't force it on me (the way Google does) and as long as the real search results don't suffer because of a lack of love (which so far they haven't), then it's no skin off my back. I think LLMs are an abysmal tool for finding information, but as long as the actual search feature is working well then I don't care if an LLM option exists.
It's not -- this was posted literally yesterday as a position statement on the matter (see early paragraphs in OP):
https://blog.kagi.com/llms
Kagi is treating LLMs as potentially useful tools to be used with their deficiencies in mind, and with respect of user choices.
Also, we're explicitly fighting against slop:
https://blog.kagi.com/slopstop
Is there anyone selling LLM tools who would claim they aren't keeping the deficiencies in mind, or admit that they're ignoring user choices? I'm not saying you are or aren't wasting money on slop, because I have no way of knowing, but it's hard to imagine someone who is concerned about a company acting in bad faith finding this compelling.
Kagi is already expensive for a search engine. Now I know part of my subscription is going towards funding AI bullshit. And I know the cost of that AI bullshit will get jacked up in price and force Kagi sub price up as well. I'm so tired of AI being forced into everything.
These are only available on the Ultimate tier. If (like me) you don't care about the LLMs then there is no reason to be on the Ultimate tier so you don't pay for it.
>expensive for a search engine.
As in, not "free"?
Either way, I guess we'll see how this affects the service.
Not for nothing, but I wish there were an anonymized AI built into Kagi that was able to have a normal conversation about sexual topics, or search for pornographic topics, like a safe-search-off function.
I understand the safety needs around things like LLMs not helping build nuclear weapons, but it would be nice to have a frontier model that could write or find porn.
You'll want de-censored models like Cydonia for that -- they can be found on OpenRouter, or through something like Msty.
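If it helps, OpenRouter exposes an OpenAI-compatible API, so running any model hosted there takes a few lines (the model id below is a placeholder - look up the exact one on openrouter.ai):

    from openai import OpenAI

    client = OpenAI(base_url="https://openrouter.ai/api/v1",
                    api_key="YOUR_OPENROUTER_KEY")

    resp = client.chat.completions.create(
        model="some-org/cydonia-variant",  # placeholder id; check openrouter.ai
        messages=[{"role": "user", "content": "hello"}],
    )
    print(resp.choices[0].message.content)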