I would love it if it could run against archived messages (`in:anywhere -in:trash -in:spam`) ... I've been archiving all email for a very long time and being able to run stats and purge it would do wonders.
Now shipped: `--scope anywhere` scans all mail (inbox, archived, sent), not just the inbox. Works with stats, purge, and sync. Thanks for the push!
This is a great call. You're right that a lot of the "hidden bloat" sits in archived mail, not the inbox.
Right now mailtrim only looks at the inbox by default, but adding support for a query like `in:anywhere -in:trash -in:spam` is very doable. Would you expect this as: 1) a flag (e.g. `--all-mail`), or 2) the default behavior?
Happy to prioritize this if it's useful. Feels like it would surface way more interesting results.
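For the flag option, a minimal sketch of how a scope setting could map onto a Gmail search query (the function name and flag values here are assumptions for illustration, not mailtrim's actual API):

```python
def build_query(scope: str = "inbox") -> str:
    """Map a scope flag to a Gmail API search query string."""
    if scope == "anywhere":
        # All mail (inbox, archived, sent), excluding trash and spam.
        return "in:anywhere -in:trash -in:spam"
    return "in:inbox"

# The resulting string would be passed as the `q` parameter of the
# Gmail API's users.messages.list call, e.g.:
#   service.users().messages().list(userId="me", q=build_query("anywhere"))
```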
Happy to help with setup if anyone tries it — GCP step is the only slightly annoying part right now.
I would love to use this on my iCloud mailbox. It seems odd to use the Gmail API instead of IMAP. Hopefully that becomes supported in the future because the project seems great.
Totally fair. This is probably the biggest limitation right now.
I started with the Gmail API because:
- better performance vs IMAP for large mailboxes
- easier access to size metadata per message
That said, IMAP support is something I definitely want to add. Especially for iCloud/Outlook users.
If I did add IMAP, would you be okay with:
- slower scans
- slightly less accurate size estimates
Or is parity with Gmail important?
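For context on the size-metadata point: over IMAP, per-message sizes come back as `RFC822.SIZE` items in FETCH responses, which then have to be parsed out. A small standalone sketch of that parsing step (the helper name is hypothetical; the commented `imaplib` call shows where such response lines would come from):

```python
import re

# Matches the size field in an IMAP FETCH response line such as
# b'42 (RFC822.SIZE 10243)'. With the stdlib imaplib you would obtain
# these lines via something like:
#   typ, data = conn.fetch('1:100', '(RFC822.SIZE)')
# after connecting and logging in; only the parsing runs standalone here.
SIZE_RE = re.compile(rb"RFC822\.SIZE (\d+)")

def parse_size(fetch_line: bytes) -> int:
    """Extract the message size in octets from a FETCH response line."""
    m = SIZE_RE.search(fetch_line)
    if m is None:
        raise ValueError("no RFC822.SIZE in response")
    return int(m.group(1))
```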
Very cool! Congratulations on putting this together.
Was also tinkering with Gmail bloat but, admittedly, with a less ambitious approach. Definitely going to give it a try.
Thanks! The Gmail API setup is the only friction. Once Mailtrim auth completes, it's just two commands. Let me know if you hit anything.
Nice approach. Confidence scoring on what's the safe one to delete is smart, and that's the hardest part of any cleanup tool. How are you handling false positives? I've been thinking about similar confidence scoring in a different domain (security) and the calibration is really tricky when the cost of getting it wrong is high.
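One standard way to frame the asymmetric-cost part of that calibration problem: if deleting a needed email costs far more than keeping a junk one, basic decision theory says only delete when the model's confidence clears a cost-derived threshold. A minimal sketch (not anything from the tool itself, just the textbook rule):

```python
def delete_threshold(cost_false_delete: float, cost_keep_junk: float) -> float:
    """Minimum P(safe to delete) at which deleting beats keeping.

    Delete when (1 - p) * cost_false_delete < p * cost_keep_junk,
    i.e. when p > cost_false_delete / (cost_false_delete + cost_keep_junk).
    """
    return cost_false_delete / (cost_false_delete + cost_keep_junk)

# If a wrong deletion is 99x worse than keeping junk, the model needs
# 99% confidence before deleting; equal costs give the usual 0.5 cutoff.
```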
Why thousands? You never read or delete all your emails within a day?
My inbox, which I've had for almost two decades, only has 28 emails in it. Not 28 unread emails, but 28 total emails. I delete everything within a day of receiving it, except for very important things, hence why 28 of them still remain.
Keeping thousands of emails in your inbox, while virtually free, is an attack vector for hackers, and also a gold mine for advertisement brokers who pay email providers money to show you ads based on your daily habits.
I am not saying I'm right, I'm just explaining how it got this bad.
See, I used to have 2 MB on my Hotmail and 4 MB on my Yahoo! Mail. I used to do exactly what you said. Then I got an invitation to Google Mail. 1 GB and counting!
I got lazy. I no longer had to delete mail anymore. So, it started accumulating. There. That's the whole story.
28 important emails in 20 years? Would the information in those emails have gotten to you via a different vector if you did not have email? This sounds like a case for not having email.
OP is aiming to help with a quite common problem. Curious: how many others have you met with as sparse an email inbox as yours?
Did you really use a LLM to generate the sample output in your readme instead of just running the application? I noticed the borders were all misaligned and wondered if you had hardcoded the number of spaces, but I looked at the code and you haven't.
If you did generate the output with a LLM instead of just running it... why?
Also:
> It uses Claude AI for smart classification, but runs entirely locally: your emails never leave your machine.
How can both of these things be true? How can Claude be used as a classifier without sending your emails to Claude? From looking at the code it appears that you do in fact just send off emails to Claude, or at least the first 300-400 characters, so that line is just a complete lie.
The share text and README have been updated to accurately say "Core cleanup runs locally — AI commands send only subjects/snippets to Anthropic." The terminal sample outputs were illustrative; I'm recording a real asciinema session to replace them. PR #8 landed the README fix.
I think the idea is that SOME of the classification (the "stats" command) works without AI, but it also supports some fancy and definitely-not-local Anthropic processing options.
Curious if others have noticed the same pattern — a few senders making up most of the inbox.
Very useful!