A question I have: do you have a background in capital markets?
Because what it looks like is you asked Claude to create a stock picking newsletter based on signals that retail commonly thinks are bleeding edge but institutions have been using for two decades and probably don’t give much edge anymore (AI analysis of news sentiment…)
And you backtested from 2012-2025, one of the longest bull runs in history. Of course your strategy of “buy NVDA” will win in that window.
Which makes me wonder: have you ever worked at a hedge fund?
I have to say I’m tempted to subscribe and have Claude manage an imaginary portfolio based on your recommendations and publish that. Because to be frank, I’m fairly confident your recs won’t generate alpha.
I run a free comparison site in a different vertical, and the "publishes its losses" line is what made me click. A methodology page on a niche aggregator earns its keep in two ways: readers can check you didn't just rank things by vibes, and it's about the only piece of content that holds up once AI summaries start chewing through your other pages. The question I keep coming back to, though: how often do you actually update the methodology vs quietly nudge it? The discipline of versioning a formula is harder than writing it, because when a result comes out wrong, the temptation to move the threshold instead is huge.
Hi, Donal from JSS here. We're on R27, revision 27 of the signal weights and features. Each revision gets snapshotted as a "golden" version in config, run through a full backtest, and the results pages pull dynamically from that snapshot, so the numbers are always anchored to a specific revision.
The round numbering is partly for exactly the reason you named: it forces a name onto every change. When a result comes out wrong, the temptation to quietly shift a threshold is real, and having to call it R28 and re-run the full validation raises the cost of doing that on a whim.
Perhaps a changelog would close the loop though? Right now, R27 is visible in config and referenced in the metrics, but there's no page that says "R27 changed X because Y, here's what the backtest/walk-forward showed before and after." That's the missing accountability layer, and probably more useful to a skeptical reader than any amount of methodology prose.
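For readers unfamiliar with the pattern being described, here is a toy sketch of what snapshotting a "golden" revision of signal weights could look like. All names here (`Revision`, the weight keys, the R27/R28 values) are hypothetical illustrations, not JSS's actual code or parameters:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class Revision:
    """One immutable 'golden' snapshot of signal weights."""
    name: str      # e.g. "R27"
    weights: dict  # feature -> weight
    notes: str = ""

    def fingerprint(self) -> str:
        # Hash the weights so any silent nudge changes the fingerprint,
        # and results pages can pin themselves to an exact snapshot.
        blob = json.dumps(self.weights, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]


# Illustrative values only.
r27 = Revision("R27", {"momentum": 0.40, "news_sentiment": 0.25, "value": 0.35})
r28 = Revision("R28", {"momentum": 0.40, "news_sentiment": 0.20, "value": 0.40},
               notes="Reduced sentiment weight after walk-forward review")

# Any change to the weights forces a new identity.
assert r27.fingerprint() != r28.fingerprint()
```

The changelog idea then falls out naturally: each `Revision` carries its `notes`, and publishing `(name, fingerprint, notes)` per revision is exactly the before/after accountability layer described above.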
> ESG is just giving up returns for a good feeling, isn't it?
> Stock picking is just folly for individual investors, isn't it?
> Anyone claiming they can consistently beat any large index is just delusional, aren't they?
On ESG/SRI: fair, excluding sectors comes at a cost, and we make that trade-off knowingly.
On stock picking: the system is rules-based and mechanical, not discretionary. The "folly" argument applies most strongly to human judgment calls, which is exactly what this attempts to remove. I wanted to reduce bias and get a better vantage point.
On beating the index: 14 years of backtested data with walk-forward validation suggest it's possible for this specific strategy. Whether it holds going forward, nobody knows. We publish the ten best and worst picks precisely because we're not claiming certainty.
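For anyone unfamiliar with the term: walk-forward validation means fitting on a rolling in-sample window and scoring only on the out-of-sample slice that immediately follows it, so the model never sees the future it's graded on. A minimal toy sketch of the split logic (the window lengths are made up for illustration, not JSS's actual setup):

```python
def walk_forward_splits(n_periods, train_len, test_len):
    """Yield (train, test) index lists: fit on a rolling in-sample
    window, then evaluate on the slice that immediately follows it."""
    start = 0
    while start + train_len + test_len <= n_periods:
        train = list(range(start, start + train_len))
        test = list(range(start + train_len, start + train_len + test_len))
        yield train, test
        start += test_len  # roll the window forward by one test block


# e.g. 14 yearly periods, train on 5, test on the next 3
splits = list(walk_forward_splits(14, train_len=5, test_len=3))
for train, test in splits:
    assert max(train) < min(test)  # test data is always strictly in the future
```

The point of the exercise is that a strategy tuned on 2012-2016 has to survive 2017-2019 unseen, and so on down the window, which is a much harder bar than one backtest over the whole bull run.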
As Matt Levine often points out, there are two possible cases for ESG:
1/ This will bring worse returns, but I'm willing to accept the loss in order to forward values I support
2/ This will bring better returns, since the market underrates risks from bad ESG companies (e.g. the long-term return on capital for coal companies will be worse than the market expects)
People marketing ESG funds (or anti-ESG, same rule applies) usually emphasise the second.
> Anyone claiming they can consistently beat any large index is just delusional, aren't they?
This is obviously not true. RenTech would like a word.
https://jumpstartsignal.com/reports/2026/04/24/
> No growth met the screening criteria on this date.
Our current economy in a nutshell, lol
That's the pipeline being honest :) no forced picks on a slow day.
Site screams jank and hijacks browsing away. 0/10, def slop daddy slop.