This feels like the peak of resume-driven development. The maker of this has taken a deterministic problem (substring matching transaction descriptions) that could be solved with a 50-line Python script or a standard .rules file and injected a non-deterministic, token-burning probability engine into the middle of it. I'll stick to hledger and a regex file. At least I know my grocery budget won't hallucinate into "Consulting Expenses" because the temperature was set too high.
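For reference, the deterministic setup mentioned here is hledger's CSV rules file. A minimal sketch (the patterns, field names, and account names are invented; your bank's export will differ):

    # bank.rules -- hypothetical hledger CSV rules
    skip 1
    fields date, description, amount
    account1 assets:bank:checking

    if KROGER|SAFEWAY|TRADER JOE
     account2 expenses:groceries

    if NETFLIX|SPOTIFY
     account2 expenses:subscriptions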
> a deterministic problem (substring matching transaction descriptions) that could be solved with a 50-line Python script
I don't know about your bank transactions, but at least in my case the descriptions are highly irregular and basically need a hardcoded rule for each and every damn POS location, across each of the store locations, across each vendor.
I attempted that (with a Python script), gave up, and built myself a receipt tracker (photo + Gemini-based OCR) instead, which was way easier and is more reliable, even though - oh the horror! - it's using AI.
Isn't this suitable for a Bayesian classifier? Label some data (manual + automation using substrings) and use that to train a classifier and then it should be able to predict things for you fairly well.
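A minimal sketch of that idea with scikit-learn, assuming a CSV of already-labeled transactions (the file and column names here are made up):

    # bayes_sketch.py -- minimal sketch, assumes scikit-learn and pandas;
    # labeled.csv with "description"/"category" columns is hypothetical
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    labeled = pd.read_csv("labeled.csv")

    # Character n-grams cope better with junk like "PAYPAL TX REFL6RHB6O"
    # than word tokens do.
    model = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        MultinomialNB(),
    )
    model.fit(labeled["description"], labeled["category"])

    statement = pd.read_csv("statement.csv")
    statement["category"] = model.predict(statement["description"])
    print(statement[["description", "category"]].head())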
There's a new feeling that I experience when using LLMs to do work: every run, every commit has a financial cost (tokens). Claude Code can write a nice commit message for me, but it will cost me money. Alternatively, I can try to understand and write the message myself.
Perhaps the middle ground is to have the LLM write the classifier and then just use that on the exported bank statements.
> Isn't this suitable for a Bayesian classifier? Label some data (manual + automation using substrings) and use that to train a classifier and then it should be able to predict things for you fairly well.
Sure, maybe, if I label enough data? At the number and variety of transactions I do, it wouldn't be much better than hardcoding.
> It's that every run, every commit has a financial cost (tokens).
Ballpark figure for total cost for the Gemini OCRs for now (for me and a few hundred other people who have downloaded the app), for the past 6 or so months, is a few minutes of my hourly rate.
Absolutely not worth the manual grind for me.
> "Perhaps the middle ground is to have the LLM write the classifier..."
There was a time when I'd read this comment and then go looking for a tutorial on building a basic "Bayesian classifier". Invariably, I'd find several, which I'd line up like soldiers in tabs, and go through them until I find one that explained the what, why and how of it that spoke to my use (or near enough).
Of course, now ChatAI does all that for you in several Chat sessions. One does wonder though, if Chat is trained on text, and that was the history of what text was available, 10 years from now after everyone stopped writing 10 Blog posts about the same "Bayesian classifier", where's the ChatAI fodder coming from? I don't even know if this would be an outcome of fewer Blog posts [1]. It just strikes me as interesting because that would be _a very slow process_.
[1]: Not that this is necessarily true. People write blogs for all sorts of reasons, and having knockout quality competition from ChatAI does not KO all of them.
Getting an LLM to write the classifier should be the way to go.
That’s what I mostly do: I give it some examples and ask it to write code to handle stuff.
I don’t just dump data into the LLM and ask for results, mostly because I don’t want to share all the data, so I make up examples. But it is also much cheaper: once I have code to process the data, I don’t have to pay for processing beyond what it costs to run that code on my machine.
> Isn't this suitable for a Bayesian classifier?
I think that's what GnuCash does by default. Even with years of past transaction data it still gets some very obvious matches wrong for me. In my experience it's about 90% accurate for the ones it really should be able to do based on the training data.
> "...it's about 90% accurate for the ones it really should be able to do based on the training data."
What's the pathway for the remaining 10%? Are they simply misclassified and dropped into a queue for manual labeling? Do the outliers get managed by GnuCash? Or do they get dumped into a misc 9000 account?
It shows you the automatic account matches on import, allowing you to double-check and correct any misclassified ones.
Ok. So what you're pointing to is not an automated pipeline, but a user-mediated process. It's the same pattern in QuickBooks, or whatever ERP.
You can likely just take a small open weight language model and use it like a classifier quite easily.
IMHO the better middle ground is to use a nice (potentially fine tuned) small model locally, of which there are many now thanks to Chinese AI firms.
An expensive model can generate the training dataset
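As a sketch of that route: any OpenAI-compatible local server works as the classifier endpoint (Ollama exposes one at the URL below by default); the model name and categories are placeholders.

    # local_classifier.py -- sketch only; assumes the openai package and a
    # local OpenAI-compatible server (Ollama's default endpoint shown)
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    CATEGORIES = ["groceries", "subscriptions", "transport", "other"]  # invented

    def classify(description: str) -> str:
        resp = client.chat.completions.create(
            model="qwen2.5:3b",  # placeholder: any small instruct model you have
            temperature=0,
            messages=[{
                "role": "user",
                "content": f"Classify this bank transaction into exactly one of "
                           f"{CATEGORIES}. Reply with the category only.\n\n"
                           f"{description}",
            }],
        )
        answer = resp.choices[0].message.content.strip().lower()
        return answer if answer in CATEGORIES else "other"  # guard against drift

    print(classify("01-03 PAYPAL TX REFL6RHB6O"))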
That dude is a Distinguished Engineer at Microsoft; he doesn't need your "resume driven" label, his resume is good enough already.
Why don't you accept it as: dude is experimenting and learning a new tool. How cool is that? If this is possible, what else can I build with these tools?
Maybe not resume driven. But hearing MS and AI together, I can't help but wonder if this is the result of one of those mandates by "leadership" where everyone is forced to come up with an AI use case or hack.
Isn't this exactly the point of innovation and mandates?
"leadership" or real leaders, want people to experiment a lot, so some of them will come up with novel ideas and either decide to build it on their own and get rich or build internally and make company rich.
Not always, but in many cases when someone becomes rich through innovation, it is probably because there was a benefit to society (excluding gambling, porn, social media addictions).
Because there was a benefit for some shareholder somewhere, maybe.
The pressure at those levels is even higher, as it is an unspoken expectation of sorts that LLMs represent the cutting edge of technology, so principals/DEs must use them to show that they're on top of their game.
No idea if this is true, but very sad if it is. This is a great argument for the concept of tenure, so experts can work on what they as experts deem important instead of being subject to the whims of leadership. I, probably naively, pictured Distinguished Engineer to be closer to that, but maybe not.
It's in the career framework of most big techs to use AI this year, so everyone is doing it to hold on to their bonuses.
Sadly, yes, it's true. New AI projects are getting funded and existing non-AI projects are getting mothballed. It's very disruptive and yet another sign of the hype being a bubble. Companies are pivoting entirely to it and neglecting their core competencies.
Fair, but that doesn't mean some of them aren't genuinely experimenting and figuring out interesting ways to use LLMs. Some examples I personally love and admire:
* simonw - Simon Willison: he could just continue building Datasette or helping Django, but he started exploring LLMs
* Armin Ronacher
* Steve Yegge
and many more
Currently Microsoft is eliminating a lot of the useless fat in redundancy plans. So the crappy "resume driven" thing might actually be needed.
That sounds exactly like the type of person that would care about their resume.
Microsoft (the company with no noteworthy accomplishments within the past decades) is a metric for a resume being good now?
All they do is buy out companies and make an already finished product theirs.
Oh he's from Microsoft? That makes malarkey like this track so much more.
In theory, yes. In practice, the shit data you are working with (descriptions that are one or two words, or the same word with a ref id) really benefits from a) an agent that understands who you are and what you are likely spending money on, and b) access to tool calls to dig deeper into what `01-03 PAYPAL TX REFL6RHB6O` actually is by cross-referencing an export of PayPal transactions.
I think the smarter play is having an agent take the first crack at it and build up a high-confidence regex rule set. Then, from there, handle the things that don't match and do periodic spot checks to maintain the rule set.
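A sketch of that first-pass-plus-review-queue shape (rules and sample data invented):

    # rules_first.py -- sketch: regexes take the first crack, misses land in
    # a review queue for the agent (or a human) to turn into new rules
    import re

    RULES = [  # (pattern, category) -- examples only
        (re.compile(r"KROGER|SAFEWAY", re.I), "groceries"),
        (re.compile(r"PAYPAL", re.I), "needs-paypal-lookup"),
    ]

    def classify(description: str) -> str | None:
        for pattern, category in RULES:
            if pattern.search(description):
                return category
        return None

    review_queue = []
    for line in ["SAFEWAY #1234", "01-03 PAYPAL TX REFL6RHB6O", "ZZQ 8839"]:
        category = classify(line)
        if category is None:
            review_queue.append(line)  # spot-check these, then grow RULES
        print(line, "->", category)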
> "...then from there handle things that don't match..."
Curious, what are the inputs for an agent when handling your dataset? What can you feed the agent so it can later "learn" from your _manual labeling_?
I think you don't use UPI, or you would understand the pain point: categorizing would be difficult even with AI.
The maker built NuGet. He doesn't need a resume.
But can he invert a binary tree?
> The maker of this has taken a deterministic problem (substring matching transaction descriptions) that could be solved with a 50-line Python script
I had a coding agent write this for me last week. :D
It takes an Excel export of my transactions, which I have to download manually, obviously, since no bank is giving out API access to their customers. It uses some Python pandas and Excel stuff plus Streamlit to classify the transactions, "tally" the results, and show them on screen as color-coded tabular data. (Streamlit seems really nice but super, super limited in what it can do.) It also creates an Excel file (with the same color coding) so I can actually open it up and check if necessary. This Excel file has dropdowns to reclassify rows. The final Excel file also has formulas in place to update live. The code can also compare its own programmatic calculations with the results from the Excel formulas. Why not? My little coding sweatshop never complains. (All with free models and CLIs, by the way. I haven't had a reason to try Claude yet.)
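Stripped of the Streamlit UI and the color coding, the core of a pipeline like that is a few lines of pandas; the column names below are guesses, and reading/writing .xlsx needs openpyxl installed:

    # tally_sketch.py -- stripped-down sketch of a pipeline like the above
    import re
    import pandas as pd

    RULES = {r"KROGER|SAFEWAY": "groceries", r"SHELL|CHEVRON": "fuel"}  # examples

    def categorize(description: str) -> str:
        for pattern, category in RULES.items():
            if re.search(pattern, description, re.I):
                return category
        return "unclassified"

    df = pd.read_excel("transactions.xlsx")  # the bank's Excel export
    df["Category"] = df["Description"].map(categorize)

    print(df.groupby("Category")["Amount"].sum())  # the "tally"
    df.to_excel("classified.xlsx", index=False)    # open and check by hand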
> "...no bank is giving out API access to their customers..."
I think Citibank has an API (https://developer.citi.com/). Not that it's public for account holders, but third parties can build on it. I'm looking at Plaid.com
One thing about Plaid: I've not been happy when encountering Plaid in the wild. For example, when a vendor directs me to use Plaid to validate a new bank account integration, I'd much rather wait a few days and use the $0.01 deposit route. Like using a katana for shaving.
But signing up to use Plaid for bank transaction ingestion via API is a whole different matter.
I've built and worked on this exact problem before at big tech, at a startup, and in personal projects.
Regex works well if you have a very limited set of sender and recipient accounts that don't change often.
Bayesian or DNN classifiers work well when you have labeled data.
LLMs work well when you have a lot of data from lots of accounts.
You can even combine these approaches for higher accuracy.
It seems like using a model to create regexes that match your transactions might be worthwhile.
Yeah, a pattern like "do the heavy lifting with cheap regexes, and every 100 line items, do one expensive LLM run comparing inputs, outputs, and existing regexes to fine-tune the regexes".
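As a sketch, that loop could look like this; ask_llm() is a placeholder for whatever model call you wire up:

    # hybrid_sketch.py -- cheap regexes first, one expensive LLM pass per
    # batch of misses to propose new rules; ask_llm() is a stub
    import re

    rules = [(re.compile(r"KROGER", re.I), "groceries")]  # grows over time
    misses: list[str] = []
    BATCH = 100

    def ask_llm(unmatched: list[str]) -> list[tuple[str, str]]:
        """Placeholder: send unmatched descriptions plus the current rules
        to a model, get back proposed (pattern, category) pairs to review."""
        return []

    def classify(description: str) -> str:
        for pattern, category in rules:
            if pattern.search(description):
                return category
        misses.append(description)
        if len(misses) >= BATCH:  # one LLM run per 100 unmatched line items
            for pat, cat in ask_llm(misses):
                rules.append((re.compile(pat, re.I), cat))
            misses.clear()
        return "unclassified"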
Charitably, this is a very naive take for unstructured bank / credit card transactions. Even if you use a paid enrichment service, you will not write a 50-line, or even 500-line, list of declarative rules that solves this problem.
I imagine a coding agent would be great at editing your regex file to maximize coverage.
Just like manually editing Sieve/Gmail filters: I want full determinism, but managing all of that determinism can be annoying…
This is an actual hard problem he is trying to fix. 50 lines of Python? Pfff..! My current personal 400-line rule script begs to differ, not to mention the PAIN of continuously maintaining it. I was looking into using AI to solve the same problem, but now I can just plug and play.
And this feels like peak HN “why not just use a regex”.
This is a hard and, for all intents and purposes, non-deterministic problem.
Now if you’ll excuse me I have to draft another post admonishing the use of
Can't wait for people building agent-based grep or whatever other already-solved problems there are. AI people really need to touch grass.
grep and ripgrep are used by Claude Code; I suspect other agents are doing something similar.
It's not about whether the agent uses it, but about some person building an agent-based grep. It was a joke.
Dude, come on, we won't be reaching "AGI" with that attitude. /s
Consumer AI product posted on a weekend during prime European hours. Brace yourselves!
Actually, I would consider this setup to not be very user-friendly. It makes a lot of assumptions about the data/format you have available already. Personally, I would expect anything operating on my bank transactions to go through some more turnkey/hands-off integration rather than a direct import.
Puzzle basically does this by hooking directly into my bank, and gives me other tools where I can easily use the categorizations.
> puzzle.io
Well, sure. Accounting solution. But in a post about scripting your way to financial fun and games (and I think there are a fair share of people here who are locked into their accounting platforms/apps for various reasons), what solution does the API call to the bank (unavailable to the small player) and then gives you an API endpoint to get cleaned data into whichever accounting solution you happen to be using? Puzzle.io ain't going to do it at any price.
What's up with the negativity? It's a nice tool, the code is open-sourced here: https://github.com/davidfowl/tally. And while the argument for a deterministic solution is valid, AI is more suitable for this task, as the task itself is ill-defined and a deterministic solution won't be able to cover all cases.
Why do LLM-generated websites feel so "LLM generated"?
It's like a new Bootstrap CSS just dropped. People still give "minimum effort" to their vibe code/eng projects but slap a domain on top. Is this to save on token costs?
This skill demonstrates how to tell an agent to make a non-generic website [1].
These are the money lines:
NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.
Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
[1]: https://github.com/anthropics/claude-code/blob/main/plugins/...
They feel like that because the people building them are generally not designers, and they don't care about novelty or even functionality as long as it looks pleasing to the eye. Most of them probably include "make it look pretty" etc. in the prompt, and LLMs will naturally converge on a common idea of what is "pretty" (apparently purple gradients on everything). And if you don't have taste, you can't tell any better, because you're doing a job you don't fundamentally understand.
Because their goal isn’t to build a website, but to promote and share their product. Why would anyone invest more time than necessary in a tangential part of the project?
Fair enough. I did not see this as a promotion of the product so much as showing off an experimental side project. But if they really want to promote the product, the LLM design isn't helping to give any confidence. A blog post would have sufficed.
It's because LLM tools have design guidelines as a part of the system prompt which makes everything look the same unless you explicitly tell it otherwise.
To give an example that annoys me to no end, Google's Antigravity insists on making everything "anthropomorphic", which gets interpreted as overly rounded corners, way too much padding everywhere, and sometimes even text gradients (a huge no-no in design). So, unless you instruct it otherwise, every webpage it creates looks like a lame attempt at this: https://m3.material.io/
To be honest if it were my software I'd probably give it a "Prof. Dr." style page and call it a day, then get called out on Hackernews with "haven't you heard of CSS? It's 2025, you actually want to entice people to use your software, don't you?" or similar.
Apparently Prof. Dr. style is the aesthetic name for pages with no custom styling of any kind, common at universities in the 1990s: https://contemporary-home-computing.org/prof-dr-style/
I'd hazard a guess that it's based on what the LLM can "design" without actually being able to see it or have taste and it still reliably look fine to humans.
Old, retired developer here. I'm interested in tally for my use case but don't want to spend any money, or only very little, on the AI. What are my free or cheap AI options?
The models from China are good and very cheap (or free) on OpenRouter; I can recommend Qwen and Kimi K2.
Here's Qwen3 Coder [1]
[1]: https://openrouter.ai/qwen/qwen3-coder:free
Gemini (Google) has a good-enough-for-personal-projects free tier. OpenRouter has free models (I'm assuming the training data is valuable enough to keep this going for a while).
You don't need AI for it.
You can just install the tool and use it. It is a CLI with very verbose output. Verbose output is good for both humans and AI.
I recently found out that my teenage son has about 20 Google accounts. Needed them for his homework he said.
That sounds very suspicious.
Unless he's using it for storage! haha. Then it's cheap.
Hmm, no, I don't think I want to tell OpenAI my banking details and transactions. (sorry, Google and Microsoft, you need to stay out, too)
Could one use local models for this? I'd assume they are good enough nowadays?!
I just went through this with my app https://ledga.us/ starting with merchant codes and my own custom rules. It catches national entities, but local ones usually fall through the cracks. You really don’t need AI, but it is pretty.
On your pricing page it is not clear what integrations Plaid has or if my banks are supported.
This is on Plaid’s site. I’ll add a link to it, but here it is:
https://plaid.com/docs/institutions/
Some friends built a whole company around this problem, it’s actually pretty difficult to resolve, with lots of edge cases, especially if you are handling multiple banks and lots of customers with slightly different needs
This tool looks pretty nice; kudos for building it and putting it out there for others to try.
I wrote this in Raku… sorry, it's in a private repo since it's just for personal use.
I tried to use LLM::Function for category matching and, in my brief excursion that way, found that submitting bank description strings to LLMs is pretty much the antithesis of what they are good at.
My solution does regex, then L-D lexical distance, then opens the Raku REPL for manual fixes…
<<A Catmap run tries to do an exact match on the description and auto-apply a category. Failing that, it does a Levenshtein / Damerau lexical distance and proposes a best-match category, prompting for [CR]. Or you can override and assign to an existing category. Go cats at the prompt to get a list of active categories.>>
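A loose Python analogue of that flow, with the stdlib's difflib standing in for the Levenshtein/Damerau step (the labeled examples are invented):

    # catmap_sketch.py -- exact match first, fuzzy match second, manual last
    import difflib

    known = {  # previously labeled descriptions -> categories (examples)
        "SAFEWAY #1234 OAKLAND": "groceries",
        "SHELL OIL 5551212": "fuel",
    }

    def propose(description: str) -> str | None:
        if description in known:   # exact match: auto-apply
            return known[description]
        close = difflib.get_close_matches(description, list(known), n=1, cutoff=0.6)
        if close:                  # near match: propose, confirm at the prompt
            return known[close[0]]
        return None                # fall through to manual assignment

    print(propose("SAFEWAY #1235 OAKLAND"))  # -> "groceries" (proposed)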
Youch. I knew the rates of people criticizing-by-headline could be bad, but this one is rough.
Y'all, please actually read the homepage before dunking on someone's project...
Or:
> claude
“There’s a CSV at @path, please open it and classify my bank transactions”.
This is not an AI tool, this is a CLI that has very verbose output and documentation.
It can be used by human or by AI agents.
I experiment the same with other mechanisms, and CLIs are as effective as - if not more effective than - MCP.
Granted, having access to AI, I would use AI to run it. But nothing is stopping a manual, human-centric use.
I believe more tools should be written like that.
I wanted to create a similar tool. Then it turned out that Claude Code is all I need, both for crunching the data (even though the export had issues) and for visualizing it. And that was back when Sonnet 4 was the strongest model.
Wasn't there an accounting software back in the 90s and early 2000s that had the same name?
Still exists, 40+ years: https://tallysolutions.com
Looks cool! I don't understand the negativity here. If you really think a general Python script can be easily written to solve this problem, I invite you to actually try and write that script.
I actually just vibed a hyper-specific version of a similar tool for myself a couple weeks ago, mostly just for fun to see if I (or Claude) could. Took about an hour, and it's now able to automate the spreadsheet process my girlfriend and I use each month to split certain expenses. Saves us each ~15 minutes weekly.
I'm loving the ability LLMs provide to both build personal software so rapidly, as well as solve these kinds of fuzzier natural language problem spaces so relatively easily.
Side note: the state of consumer transaction reporting is absolute garbage. There should really be more metadata mandated by consumer protection regs or something. The fact that this is a hard problem in the first place feels very dumb.
Without ERP integration, it's a home-use project.
Adjacent: Our biz uses Quickbooks, and while I'm not a fan in general, its pattern matcher does a pretty good job of matching credit card transactions to expense categories and accounts.
I have no idea what the deterministic / probabilistic mix is under the hood.
Really cool!
The original namesake Tally[0] is a very long-running ERP and accounting software (started in 1986!) used by thousands of businesses around the world.
[0] https://tallysolutions.com/tally-prime/
You could have picked any other name instead of a 20-year-old software company doing invoices lol
Anyone interested in Tallyai is far too young to know the old company