This is Akshay, the original creator of marimo. Our whole team has come over to CoreWeave. We're building a whole lot more, not less, and our number one priority continues to be the open-source project. We're also growing the open-source team, i.e. we're hiring.
Thanks for the link. I'm not a subscriber to their blog and would otherwise not have known about this change that affects the recommendations I have been giving people.
I never really liked Jupyter. I've built a couple of "notebooks", as they have come to be known these days, much better than Jupyter, and still switched to marimo eventually. It's quite similar to what I wanted to have.
> Teams need notebooks that are reactive, collaborative, and AI‑ready
reactive: this matters, but all the alternatives have it
collaborative: this matters very little in the Figma / Google Docs sense of collaborative in practice. It's very rare you want two people working on the same notebook at the same time. What you really want is git style version control.
AI‑ready: you want something as close to plain python (which is already as AI-ready as it gets) as possible.
if you're measuring across these dimensions, I'd go with marimo.
marimo is saved as plain .py files, easy to version control and has a reactive model.
There are plenty of AI extensions, but the experience matters. The depth of integration matters. When you execute queries against production warehouses and you make decisions based on the results of AI-generated code, accuracy matters. We had our first demo of an AI agent running in 2 days; it took us another 2 years to build the infrastructure to test it, monitor it, and integrate it into the existing data sources.
You'd be surprised how many people collaborate together. Software engineering is solitary, collaboration happens in GitHub. But data analysis is collaborative. We frequently have 300+ people looking at the same notebook at the same time.
.py never worked for data exploration. You need to mix code, text, charts, interactive elements. And then you need to add metadata: comments, references to integrations, auth secrets. There are notebooks that are several pages long with 0 code. We are building a computational medium of the future and that goes beyond a plaintext file, no matter how much we love the simplicity of a plaintext file.
We seriously considered this, but decided against it. While elegant for demo projects, it doesn't scale for serious deployments. You still need to deal with secrets, metadata (lots of it), backwards compatibility, and extensibility (we have 23 block types today, many more to come).
Claiming that Jupyter is no longer popular because the number of job postings mentioning it has decreased is not something a company in the data space should do. It is just embarrassing.
And the graph they show has a y axis that doesn't start at zero (which exaggerates the "downward trend") _and_ has a non-uniform x axis: each tick mark represents a different span of time (1 yr, 3 mos, 1 mo).
Not sure if this is sarcasm, but GP has good taste and when marketing this stuff to developers, you need good taste. Otherwise it just rings hollow, and doesn’t inspire enthusiasm.
>Meanwhile, the market is voting with its feet. Across the Fortune 1000, job postings that mention and require Jupyter knowledge are down sharply; the most recent month was deep in the red YTD.
I agree with this. I have no problem with a technical discussion comparing the features and discussing Jupyter’s shortcomings. But the Jupyter job postings and contribution graph screenshots felt both ill-spirited and not particularly relevant.
There was interesting stuff here: the human readable format, auto publication. But the tone and framing bashing a reliable open source project really turned me off.
The title and first paragraph make it sound like this is a project by the same people as Jupyter (or endorsed by them). Apparently that's not the case, and it also looks very similar to Google Colab: Jupyter + a better UI + some LLM integrations.
> Take the UI you're used to from Deepnote Cloud and run it locally
> Edit notebooks with a local AI agent
> Bring your own keys for AI services
> Run your own compute
this announcement had such strong gpt-output vibe..
to the "writers": pleeease don't present unedited slop to me. i'm a human, if you want my attention consider using your own voice. i don't want to read what gpt thought would "market" you best.
but this thread did remind me of marimo so that's sweet
Notebooks aren't competing with scripts, they're competing with REPLs.
ML and scientific applications in particular tend to have segments that run for a long time, but then you'd like the resulting script to be in a state where you can mess with it, maybe display some multimedia output, etc, without re-running the long-running segment. Notebooks fit this need to a tee.
I've found that notebooks are great for ad hoc reporting and analysis scripts. Once you have your quick and dirty script, it is trivial to convert to a notebook and you get a lot for little. Being able to change one cell and rerun just that is a godsend for getting reports "just right", and the "show your work" and visual aspect make them much more consumable and trusted by other people.
Jupyter is excellent at what it is designed for. For example, my usual workflow is as follows: when I develop a tool or model, I do so in a plain Python file. I then import the file from the notebook to create figures, demonstrations, documentation and so on, resulting in an immediate document that my colleagues and I can easily use for discussion. It's as simple and effective as that. It is also a great tool for teaching coding to beginners. Of course, notebooks are not designed for code development. Also, nowadays, if you want to, you can open notebooks in dedicated apps.
Any reason your colleagues can't run the original Python script? It seems like your workflow entails going back and forth between script and notebook or having to make changes in two places to keep both versions in sync.
Then there is also the issue that notebooks are too complex to version-control effectively.
Notebooks are closer to using a repl than a script.
One key feature is that you can run a long data-prep/processing step and then iterate on whatever comes after without having to re-run the compute intensive steps to get the data (i.e.) you want to graph
Another key feature is learning and sharing knowledge. Images, markdown, graphs, links, code... all interwoven. Scripts do not have affordances for these things. In this sense, Notebooks can be closer to reproducible blog posts.
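That "run the expensive step once, then iterate downstream" pattern can be approximated in a plain script by caching the slow step to disk. A sketch (`prep_data` and the cache filename are stand-ins, not from the thread):

```python
import pickle
from pathlib import Path

CACHE = Path("prep_cache.pkl")  # hypothetical cache file


def prep_data():
    # Stand-in for a long-running data-prep step.
    return [x * x for x in range(10)]


def load_or_prep():
    # Reuse the cached result so edits to downstream code
    # don't re-trigger the slow step, notebook-style.
    if CACHE.exists():
        return pickle.loads(CACHE.read_bytes())
    data = prep_data()
    CACHE.write_bytes(pickle.dumps(data))
    return data


data = load_or_prep()
```

It's cruder than a live kernel (no interactive inspection), which is part of why notebooks still win for this workflow.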
notebooks are great when you wanna see the intermediate results and not just the final result (in case the last step takes time and you wanna double check the data), or when you just wanna better understand what the code does (yes, you can set up a debugger and debug a script etc, but that is just more pain)
I have no real qualms with the idea of a notebook, as long as it's not adding a lot of custom magic. I should be able to share what I'm working on, iterate in a notebook with someone, then extract it into a standalone program without much thought.
One issue is that very often the "magic" happens in imported modules, so you can't really see what is happening unless you drop down to a text editor anyway. Then there is the infamous issue of modules not automatically reloading even when rerunning the Notebook.
That in theory shouldn't be too bad. Though, there have been some things like %sql I remember using in one notebook, which was essentially a macro for making a datatable from a SQL expression, but it wasn't something you could just copy directly.
I do think notebooks are very flawed right now, even if the concept is sound. Right now they are essentially an IDE without the ability to produce publishable output (you are literally expected to ship the project in the form the IDE works with), and they are not a very good IDE at that.

They need reliable, dedicated published output fit for general public consumption. This means a static (no backend host required), sharable .html file where end users can view all the data and run the code samples, and that doesn't try to present as an IDE. I actually wrote https://rubberduckmaths.com/eulers_theorem in Jupyter but had to manually copy and paste into a new, well formatted static html file and re-paste the code blocks into Pyodide-enabled text areas within that html, since the export functionality is a mess. The result of the manual work is that I now have an easily sharable and easily hosted static html file with working Python code samples, as it should be. But why don't notebooks have a published form like this already?

It seems pretty obvious that notebooks are your IDE; they shouldn't be the output you present. We're literally asking users today 'to view this notebook, install Jupyter/Marimo/whatever and open it from there', when the notebook is designed to create the publication rather than be a place to view it. The output I demonstrate above should be the minimum bar that notebook 'export' features hit: export as a static .html file with working code. As someone who manually 'compiles' notebooks, it's not hard to do, yet notebooks simply don't have an actual working html export right now (I know there's technically an 'html' export option in Jupyter, but it will strip out your code and create a terribly formatted document as output).
The IDE aspects themselves, at least for Jupyter (the one I've tried out the most), are a bit too simple too. Yes, it's nice to have alternating 'text' blocks followed by 'code' blocks, but that's really all they are right now. I want something more complex. I want the code blocks shown to actually be windows into a full Python project. Users should be able to change the code shown and view the larger, well structured Python code. Right now it's text followed by simple code, not much more honestly. As it is right now I feel notebooks only work for really simple projects.
If you have a complex Python project, I agree the only way to share it is to share the Python project as is. Notebooks could be a wonderful explanatory wrapper around a larger project, but right now they just aren't good at doing much more than the simple 'here's some data' followed by 'here's the code I used to process that data', and they don't even present that particularly well.
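The manual "compile to static HTML" step described above can be sketched as plain string templating. Everything here is a toy of mine (not any real exporter's API): cells are `(kind, text)` pairs and the output is one self-contained page.

```python
import html


def cells_to_html(cells, title="Notebook export"):
    """Render (kind, text) cells to one static HTML string.

    kind is either "md" (treated as plain prose here) or "code".
    A real exporter would also render markdown and wire the code
    blocks to something like Pyodide so they stay runnable.
    """
    parts = [
        f"<!doctype html><html><head><title>{html.escape(title)}</title></head><body>"
    ]
    for kind, text in cells:
        if kind == "code":
            parts.append(f"<pre><code>{html.escape(text)}</code></pre>")
        else:
            parts.append(f"<p>{html.escape(text)}</p>")
    parts.append("</body></html>")
    return "".join(parts)


page = cells_to_html([
    ("md", "A tiny demo page"),
    ("code", "print(pow(3, 4, 7))"),
])
```

The point is that the published artifact is a single file anyone can open, with no notebook server in sight.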
Notebooks are great for sharing code with people who might have only a cursory knowledge of coding. They also help highlight specific sections of code. Graphs and other outputs immediately following the block of code that generates them can be really helpful as well.
If a "script" gets too long and complicated, and it's a project I intend to present to others, I often reach for a notebook to organize the code in a more digestible format
What’s these folks’ relationship to Jupyter? I guess they must be some of the really prominent Jupyter developers? Otherwise declaring their system the successor to such a widely used tool seems pretty presumptuous.
Indeed regular Jupyter works so well on VS Code for solo work these days that there is no real need for a new entrant.
So what pain point are these new entrants trying to solve?
Sure, there is the issue of .ipynb basically being gnarly JSON ill-suited for git, but it is rare that I need to track down a particular git commit. Even then, that JSON is not that hard to read.
Also I'd like an easier way to copy cells across different Jupyter notebooks, but at the end of day it is just Python and markdown not very hard to grok.
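The usual workaround for the git-diff pain mentioned above is to strip outputs and execution counts before committing, which tools like nbstripout automate. A bare-bones sketch of the idea (the sample notebook dict is illustrative):

```python
def strip_outputs(nb: dict) -> dict:
    """Remove outputs and execution counts so diffs show only source changes."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb


# A minimal .ipynb-shaped dict with one executed code cell.
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [{
        "cell_type": "code",
        "metadata": {},
        "source": ["1 + 1"],
        "execution_count": 3,
        "outputs": [{"output_type": "execute_result",
                     "data": {"text/plain": ["2"]}}],
    }],
}
clean = strip_outputs(nb)
```

Run as a pre-commit hook, this keeps base64 image blobs and churned execution counts out of every diff.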
Might wanna spell-check the post if you want any credibility on your claim:
> worfklows
Observable is already open-sourced and well-respected. Bold and ridiculous to claim your random product is "the successor" of a well-known project, without any obvious relationship to the founders/maintainers of the thing you claim to be aping.
A lot of the comments on here are too pessimistic. Deepnote has had the single best jupyter interface for years now - unfortunately locked behind a cloud subscription though. Jupyter itself has been stagnant for far too long, and it's much appreciated there's more options coming online that have a modern level of polish.
Marimo is great, but it's good to have competition in the space (especially when both projects are still owned and maintained by VC backed companies).
It's telling that Wolfram / Mathematica doesn't even come up in a blog post like this, as the inventors of "the notebook". Jupyter took the concept to a whole new level, but the concept did originate in Mathematica 30 years ago!
The concept of literate programming, text interspersed with code, is older. Knuth mostly invented it, writing TeX, among other things, in it. Org mode even let you evaluate code blocks and store the output, or use it in future blocks.
I love notebooks, I use them all the time for teaching, writing (all of my books are written in notebooks), EDA, model development, and more. I've spoken at Jupytercon.
Having said that, I've never played around with other notebook implementations (ok, I've used IPython Notebook, Jupyter Notebook and Lab, Google Colab, ein (emacs), Jupyter in Vscode, and Notebook (.py) files in Vscode).
I've seen Joel's rant about notebooks, and they do have drawbacks.
But I would rather push better programming practices (chaining pandas, using functions, rearranging cells) than have dependent cells written in the horrible piecemeal style that I see all around the industry.
My biggest issue with notebooks is JSON. I've used Jupyter to get around it for years, and now many LLMs are decent at writing Jupyter JSON.
You claim it’s a successor, but is everyone really on board with that? I love jupyter, but generally feel like having to run a server is the downside.
The nice thing about jupyter notebooks is that you can run them inside vscode without an explicit server, but I like to just use %% so that I can run it in zed and vs code and it’s just a python file that doesn’t need conversion.
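For anyone unfamiliar, the percent format mentioned here is just specially formatted comments, so the file stays an ordinary Python script and runs unchanged outside any notebook UI (this toy example is mine, not the commenter's):

```python
# %% [markdown]
# # A tiny analysis
# Prose lives in markdown cells; editors like VS Code and Zed
# render "Run Cell" controls above each `# %%` marker.

# %% Load data
values = [2, 4, 6]

# %% Summarize
mean = sum(values) / len(values)
print(mean)
```

Because the markers are comments, there is nothing to convert: `python analysis.py` and cell-by-cell execution both work on the same file.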
Haha, I've asked LLMs to avoid fluff in a prompt and gotten exactly a heading like that one with "(no fluff)" before.
Otherwise I've followed DeepNote since they started. I agree with other comments that it's icky to announce yourself as a successor to someone else's project, but always nice to have more options for open source
I'm not involved in any capacity with the development or use of Jupyter—I think ipynb is fundamentally flawed at a deep level, starting with its (I)Python roots—but this company's framing of their product as "the successor to Jupyter notebook" comes across as passive aggressive at best and misleading at worst. What is their relationship to Jupyter besides building a Jupyter alternative?
Didn't expect to see this trending here! We worked hard to execute on our vision of a data notebook and I'm glad we finally got a chance to open source it. We stand on the shoulders of giants. AMA!
Way to undermine an interesting product launch through poorly chosen language:
> Let’s be frank the single‑player notebook has felt outdated for a while now. We’re open‑sourcing its successor. Jupyter belongs in the hall of great ideas — alongside “Hello, world.” and “View Source.”
If you're trying to reach out to the Python community this is not the way to do it. Completely unnecessary hostile language there! Have some respect.
My advice to Deepnote is to scrap this launch announcement (ideally with an apology) and try again. They've built something genuinely useful and interesting, but it's going to get a lot less attention than it deserves if they introduce the open source version to the world like this.
The whole post feels like it was edited/modified by ChatGPT; `What we opened — in English, not a changelog`, `Why it matters (no fluff):`, `We are big believers in notebooks — full stop` are patterns that always make me feel like an LLM wrote it (sentence followed by a marketing qualifier).
I really liked Deepnote the product when I last used it, but the post definitely feels off.
I don't think an LLM wrote it; this has been their brand voice for a long time...
> an interesting product launch
This looks less like an "interesting" product, and more like a case of pivoting a commercial product that isn't making enough money into an open source one in the hope of at least gaining some credibility from all that work as well as undercutting the competition.
> They've built something genuinely useful and interesting
This core product has been around for years now. If it was that interesting and useful, more people would have likely paid for the original offering.
I would certainly recommend taking this 'release' with a touch more cynicism.
I saw this on LinkedIn earlier and literally closed it after “Let’s be frank the single‑player notebook has felt outdated for a while now”
I think it must be messaging for “leadership” as opposed to practitioners; there are lots of real pain points, but they don’t seem to be mentioning them
Completely agree. I don't know anyone who isn't financially incentivized to see the Jupyter project fail who feels this way about Jupyter. This whole post stinks of "we're losing to Jupyter so let's throw up a ridiculous Hail Mary".
Simple explanation, they used AI as a voice for their writing instead of using it as a tool for writing in their own voice.
LLMs are good to proofread, check your tone, generate ideas, etc.
Letting them take over your connection with an audience or be a substitute for gut checks or taste is not helping anyone.
Seeing such pivotal announcement be poorly vetted slop doesn't really inspire confidence in the quality of their product.
When we stand on the shoulders of giants, we don’t do so to dump on them.
I do not often make a point after upvoting, but instead of writing more or less the same: this ^. If not for the open source, I would have closed the page after that blurb, thinking something is off and I do not need it.
TIL "Hello world!" has been put out to pasture.
Who needs Hello World when you can have an LLM implement an entire number guessing game for you?
For when the LLM completely screws up the code as it does
I don't know if I'm missing something here, but I think they rephrased the article, or at least the quoted sentence is not there anymore.
Edit: I checked with wayback machine and they definitely modified the article.
Why is "View source" listed here as if it's some outdated feature of the past?
Probably an AI wrote large parts of this press release.
That’s no excuse. Someone shipped it (and ostensibly read it).
100%, you can feel this is GPT5 style.
At least they did a s/—/-/ in the copy
it is since Javascript compilers and minification became a thing
The whole article felt very dishonest and frankly quite rude towards Jupyter. They self-declare themselves the successor to a project that's still alive and that they seemingly have no legitimate claim to, and then go on to bash it by saying it's dying because job postings and commits are decreasing. The latter point is especially dishonest: Jupyter is already quite complete and fully featured, so maybe it doesn't need constant daily commits?
Maybe they should focus less on bashing Jupyter and more on showing what's good about them. For example, they stated multiple times that Jupyter is messy JSON, but they never showed off their own format... Just some vague hand-wavy "perfect for AI!"
>If you're trying to reach out to the Python community this is not the way to do it. Completely unnecessary hostile language there! Have some respect.
The note sounds as if written by some manager/marketing guy who hasn't touched a line of code in 20 years...
For sure it put me off even checking what their shit is (I was initially interested upon seeing the HN post).
it’s what happens when people use LLMs to write their posts. Rubbishing Jupyter is an obvious choice if you’re a machine writing a compelling post. Rubbishing Jupyter if you’re a human being with a stake in the space is a terrible choice.
My constructive advice for deepnote: if you don’t have something to say from the heart, don’t ask an LLM to generate something for you. Write less, not more. For a post this important, an LLM is a terrible choice.
It reads like a sales call where they’re getting customer pushback and responding with something quantitative - not a good opener at all, especially for this audience. The GPT tone pushes it over the edge.
What they built looks great and I don’t disagree with their take in substance, but you get one chance to make your open source announcement good - don’t blow it like this.
Yes. This wording just misses the mark and sounds super tone deaf
Not sure that an apology is necessary though. Some overconfident marketing person tried something, and it failed. That's what happens if you try stuff. They should just try harder next time
Thanks for the feedback Simon!
Did you... read it?
Apparently they're continuing the tone-deaf announcement with tone-deaf responses to feedback.
“Just post through it” is a tried and true tactic!
They did edit the post..
I don't know much about this, but I understand Project Jupyter is Nonprofit. If I go to "jupyter.org" I see a tab "Community" and another "Governance". If I go to "deepnote.com" I see "Customers" and "Pricing".
Why would people want a standard to be controlled by a private company? I don't think the "Open-Sourcing" of it says enough. How does licensing work with formats or standards?
People don't want that. This article is largely empty marketing. Claiming they have "the successor" is all you need to read before you can infer it's hot air.
All standards are ultimately controlled by private companies. Even non-profits require funding.
Open source has always depended on a viable business model (of one or many companies) that can sustain not just the release but also the ongoing maintenance of the standard.
Interestingly, even the Warez Scene has standards, and no commercial backing. They're enforced, too.
To see the actual standards, you can search for "standard" on https://defacto2.net/search/file
There's a free book that covers that topic:
https://punctumbooks.com/titles/warez-the-infrastructure-and...
Which private companies control Jupyter?
The problem with corporate control isn't that they require funding or they are private, the problem is they are motivated first by profit. Sometimes exclusively. So when "what's best" is at odds with "what's profitable", they tend to make the wrong choice.
Take this project for instance. If one day their choice is to forgo all future profits, or to close the source to continue operating, it's very likely they will close the source to continue operating, rather than forgoing profits. We've seen it happen enough to be wary from the project structure alone.
I’m not familiar with Deepnote, but I have quite a lot of experience with Jupyter, and if someone were to ask me if there are more modern alternatives I would immediately point them to marimo (https://marimo.io/). For me marimo is already a successor to Jupyter, it has replaced it entirely for me.
But doesn't marimo force certain workflows that jupyter does not? For example its website states that "Notebooks are executed in a deterministic order, with no hidden state — delete a cell and marimo deletes its variables while updating affected cells." This appeals to people doing traditional software development work in notebooks, but it breaks workflows where people use notebooks as notebooks, where state is entirely separate from the in-notebook presentation of cells. Do people using notebooks these days hate this fundamental feature of notebooks? It's the key reason why notebooks aren't just a transcript of a REPL session!
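The behavior being described (deterministic order, and deleting a cell dropping its variables) can be sketched with a toy dependency-tracking runner. This is purely illustrative and is not marimo's actual implementation:

```python
# Each "cell" declares what it defines and what it reads, so the
# runner can execute cells in dependency order. Deleting a cell
# (and its dependents) leaves no hidden state behind, because every
# run rebuilds the namespace from the surviving cells.

cells = {
    "a": {"defines": {"x"}, "reads": set(), "code": "x = 10"},
    "b": {"defines": {"y"}, "reads": {"x"}, "code": "y = x + 1"},
    "c": {"defines": {"z"}, "reads": {"y"}, "code": "z = y * 2"},
}


def run(cells):
    ns, done, defined = {}, set(), set()
    while len(done) < len(cells):
        progressed = False
        for name, cell in cells.items():
            if name in done or not cell["reads"] <= defined:
                continue
            exec(cell["code"], ns)  # run the cell once its inputs exist
            done.add(name)
            defined |= cell["defines"]
            progressed = True
        if not progressed:
            raise RuntimeError("cycle or missing dependency")
    return ns


ns = run(cells)   # x = 10, y = 11, z = 22

del cells["b"]    # "delete" the middle cell...
del cells["c"]    # ...and its dependent, which read y
ns2 = run(cells)  # fresh run: y and z are simply gone
```

The contrast with a classic notebook kernel is exactly the point of the parent comment: there, `y` would live on in memory after its cell was deleted.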
> Do people using notebooks these days hate this fundamental feature of notebooks?
It's one of the things that is the most confusing to people I've worked with. The idea that there is hidden state and you have to re-run cells to get variables to update (or variables still exist when you've deleted those cells) is quite confusing.
If you're trying to have a reproducible workflow, it can be difficult. Jupyter is no different from other notebooks in this regard (RStudio, for example will happily run code and keep variables around that you don't reference any longer in your .R or .Rmd files.)
But I see your point -- if you're using it as a long-term storage notebook, then this is the expected behavior. And you absolutely want to have "historical" data/results kept.
I generally think of / use notebooks as a way to make reports for analyses. So, I want to work with them, draft them, change them, put them in git, etc... then run them all at once to get my output. For me, having a reproducible, documented workflow is more important. I don't want state to be kept outside of those one-off runs. Really until your comment, I didn't understand the other side of the issue, so thanks!
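The hidden-state behavior being discussed can be sketched with a toy "kernel" in plain Python. This is a hypothetical simulation for illustration, not Jupyter's actual implementation: cells share one namespace, so deleting a cell's text doesn't delete the variables it created.

```python
# Toy "kernel": cells share one namespace, so deleting a cell's text
# does not delete the variables it created -- the hidden-state problem.
ns = {}

cells = [
    "x = 41",     # cell 1
    "y = x + 1",  # cell 2
]
for cell in cells:
    exec(cell, ns)

# "Delete" cell 1, then re-run the remaining cell in the SAME kernel:
# it still works, because x lingers in the namespace.
exec("y = x + 1", ns)
print(ns["y"])  # 42

# A fresh kernel (restart + run all) reveals the breakage.
try:
    exec("y = x + 1", {})
except NameError as err:
    print("fresh run fails:", err)
```

This is exactly why "restart kernel and run all" is the standard sanity check before sharing a notebook.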
Anyone trying to reproduce the results in a shared notebook certainly hates that "feature"
Not at all. It's just a design choice. Personally I'd stick with Jupyter because state is maintained.
Yup, Marimo seems perfectly gittable and Deepnote looks more of the status quo.
> Human-readable format: The .deepnote YAML format replaces .ipynb's messy JSON with clean, version-control and human-friendly structure for projects and notebooks. You can organize multiple notebooks, integrations, and settings into a single .deepnote project for better structure and collaboration.
https://marimo.io/blog/python-not-json
I confess that picking yaml doesn't feel safe to me. As much as it annoys people, fenced formats are the way, here. And if you aren't careful, you are going to recreate XML. Probably poorly.
significant whitespace is just problematic in all spaces.
Marimo is much better for git than Jupyter. I only wish it had a gittable/reviewable version with output, too.
I actually mostly use Jupyter for non-Python code (e.g. Julia or Ruby). How is Marimo's support for other languages?
Non-existent AFAICT, files saved as ‘.py’. I use Jupyter primarily with F#, the multi-language support is huge.
Will github render a Marimo notebook as if it has already been executed, like they do for Jupyter notebooks?
Marimo has been acquired (last week), does the SaaS enshitification pattern give you any pause?
https://marimo.io/blog/joining-coreweave
This is Akshay, the original creator of marimo. Our whole team has come over to CoreWeave. We're building a whole lot more, not less, and our number one priority continues to be the open-source. We're also growing the open-source team, i.e. we're hiring.
Thanks for the link. I'm not a subscriber to their blog and would otherwise not have known about this change that affects the recommendations I have been giving people.
Have to say marimo is excellent and is a breath of fresh air compared to Jupyter!
What does it do better? I'm happy with Jupyter for most of my cases but never hurts to look around.
One thing I like about marimo is the autocompletion's much faster than Jupyter.
How hard would it be to add creatine collaboration to Marimo?
Pretty easy. Just a scoop of whey.
x2 to marimo, please try it if you haven't.
I never really liked Jupyter. I've built a couple of "notebooks", as they have come to be known these days, much better than Jupyter, and still switched to marimo eventually. Quite similar to what I wanted to have.
> Teams need notebooks that are reactive, collaborative, and AI‑ready
reactive: this matters, but all the alternatives have it
collaborative: this matters very little in the Figma / Google Docs sense of collaborative in practice. It's very rare you want two people working on the same notebook at the same time. What you really want is git style version control.
AI‑ready: you want something as close to plain python (which is already as AI-ready as it gets) as possible.
if you're measuring across these dimensions, I'd go with marimo.
marimo is saved as plain .py files, easy to version control and has a reactive model.
I'd argue the opposite.
There are plenty of AI extensions, but the experience matters. The depth of integration matters. When you execute queries against production warehouses and you make decisions based on the results of AI-generated code, accuracy matters. We had our first demo of an AI agent running in 2 days; it took us another 2 years to build the infrastructure to test it, monitor it, and integrate it into the existing data sources.
You'd be surprised how many people collaborate together. Software engineering is solitary, collaboration happens in GitHub. But data analysis is collaborative. We frequently have 300+ people looking at the same notebook at the same time.
.py never worked for data exploration. You need to mix code, text, charts, interactive elements. And then you need to add metadata: comments, references to integrations, auth secrets. There are notebooks that are several pages long with 0 code. We are building a computational medium of the future and that goes beyond a plaintext file, no matter how much we love the simplicity of a plaintext file.
seems you completely missed the point. marimo does everything you're looking for in plain .py files that render as notebooks.
https://marimo.io/blog/python-not-json
We seriously considered this, but decided against this. While elegant for demo projects, it doesn't scale for serious deployments. You still need to deal with secrets, metadata (lots of it), backwards-compatibility, and extensibility (we have 23 block types today, many more to come).
This is the first time I'm hearing about marimo and i have to say their landing page is excellent! Immediately makes me want to try it
Claiming that the number of job postings mentioning Jupyter has decreased, so Jupyter is no longer popular is not something a company in the data space should do. It is just embarrassing.
And that graph they show has an offset y axis (hides the scale from 0 to exaggerate the "downward trend") _and_ has a non-uniform x axis. Each tick mark represents a different scale of time (1yr, 3 mos, 1 mo)
Wtf
Oh wow did not catch that!
Wouldn't the successor for Jupyter be decided by adoption? For a single team to self declare this seems a bit crass, no?
I would recommend not making any career pivots to sales or marketing.
Marketing is audience-dependent.
If you are marketing to VCs then disparaging competitors and making grandiose claims can be effective.
But when marketing to developers then constructive criticism and humility may be more effective.
Not sure if this is sarcasm, but GP has good taste and when marketing this stuff to developers, you need good taste. Otherwise it just rings hollow, and doesn’t inspire enthusiasm.
>Meanwhile, the market is voting with its feet. Across the Fortune 1000, job postings that mention and require Jupyter knowledge are down sharply; the most recent month was deep in the red YTD.
This is a joke, right?
Framing of this seems a bit nasty tbh - jupyter deserves a little bit of respect on its name!
I agree with this. I have no problem with a technical discussion comparing the features and discussing Jupyter’s shortcomings. But the Jupyter job postings and contribution graph screenshots felt both ill-spirited and not particularly relevant.
There was interesting stuff here: the human readable format, auto publication. But the tone and framing bashing a reliable open source project really turned me off.
Yes, agree. To the point that I'm not very interested in looking them up.
It isn't even a successor. A successor would be open source from the start!
Title and first paragraph make it sound like this is a project by the same people as Jupyter (or endorsed by them). Apparently that's not the case, and it looks very similar to Google Colab: Jupyter + better UI + some LLM integrations.
But kudos for going oss
The hubris of this self declared successor. It’s not even the same team.
I'm confused. I checked out the repo, and I don't think the notebook itself (the equivalent of Jupyter) is open source yet:
https://github.com/deepnote/deepnote/
What's the equivalent of `jupyterlab run`?
> You'll soon be able to:
> Take the UI you're used to from Deepnote Cloud and run it locally
> Edit notebooks with a local AI agent
> Bring your own keys for AI services
> Run your own compute
Do people find this sort of writing appealing?
For me, it's cringe to borderline painful to read.
yes!
this announcement had such strong gpt-output vibe..
to the "writers": pleeease don't present unedited slop to me. i'm a human, if you want my attention consider using your own voice. i don't want to read what gpt thought would "market" you best.
but this thread did remind me of marimo so that's sweet
Are there any other people who hate notebooks? Give me a plain old script anytime. Run and edit anywhere, without extra packages or even a web browser.
Notebooks aren't competing with scripts, they're competing with REPLs.
ML and scientific applications in particular tend to have segments that run for a long time, but then you'd like the resulting script to be in a state where you can mess with it, maybe display some multimedia output, etc, without re-running the long-running segment. Notebooks fit this need to a tee.
I've found that notebooks are great for ad hoc reporting and analysis scripts. Once you have your quick and dirty script, it is trivial to convert to a notebook and you get a lot for little. Being able to change one cell and rerun just that is a godsend for getting reports "just right", and the "show your work" and visual aspect make them much more consumable and trusted by other people.
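The "change one cell and rerun just that" benefit can be approximated in a plain script by caching the expensive step. A rough sketch (the cache file name `prep_cache.pkl` is made up for the example):

```python
import pickle
import tempfile
from pathlib import Path

CACHE = Path(tempfile.gettempdir()) / "prep_cache.pkl"

def expensive_prep():
    # Stand-in for a long-running data-prep step.
    return [x * x for x in range(10)]

# Notebook-style workflow in a script: pay for the slow step once,
# then iterate freely on everything below it.
if CACHE.exists():
    data = pickle.loads(CACHE.read_bytes())
else:
    data = expensive_prep()
    CACHE.write_bytes(pickle.dumps(data))

print(sum(data))  # 285
```

It works, but the notebook version gives you this for free, per cell, without inventing cache keys.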
Jupyter is excellent at what it is designed for. For example, my usual workflow is as follows: when I develop a tool or model, I do so in a plain Python file. I then import the file from the notebook to create figures, demonstrations, documentation and so on, resulting in an immediate document that my colleagues and I can easily use for discussion. It's as simple and effective as that. It is also a great tool for teaching coding to beginners. Of course, notebooks are not designed for code development. Also, nowadays, if you want to, you can open notebooks in dedicated apps.
Any reason your colleagues can't run the original Python script? It seems like your workflow entails going back and forth between script and notebook or having to make changes in two places to keep both versions in sync.
Then there is also the issue that notebooks are too complex to version-control effectively.
>make changes in two places to keep both versions in sync.
That's not how Python imports work. You can import in the notebook just like any other Python script.
I'm more amused by them than anything. They're mostly just a reinvention of literate programming, via web or org mode or whatever
Notebooks are closer to using a repl than a script.
One key feature is that you can run a long data-prep/processing step and then iterate on whatever comes after without having to re-run the compute intensive steps to get the data (i.e.) you want to graph
Another key feature is learning and sharing knowledge. Images, markdown, graphs, links, code... all interwoven. Scripts do not have affordances for these things. In this sense, Notebooks can be closer to reproducible blog posts.
notebooks are great when you wanna see the intermediate results and not just the final result (in case the last step takes time and you wanna double-check the data), or when you just wanna better understand what the code does (yes, you can set up a debugger and debug a script etc., but that is just more pain)
I have no real qualms with the idea of a notebook, as long as it's not adding a lot of custom magic. I should be able to share what I'm working on, iterate in a notebook with someone, then extract it into a standalone program without much thought.
One issue is that very often the "magic" happens in imported modules, so you can't really see what is happening unless you drop down to a text editor anyway. Then there is the infamous issue of modules not automatically reloading even when rerunning the Notebook.
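The reload complaint is real: Python caches imports in `sys.modules`, so re-running an `import` cell picks up nothing. A minimal stdlib demonstration (the `helper` module is invented for the example; in notebooks, IPython's `%autoreload` extension automates this):

```python
import importlib
import sys
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())
sys.path.insert(0, str(tmp))

# Version 1 of a helper module.
(tmp / "helper.py").write_text("VALUE = 1\n")
import helper
print(helper.VALUE)  # 1

# Edit the module on disk; a second import is a no-op because the
# module is already cached in sys.modules.
(tmp / "helper.py").write_text("VALUE = 2  # edited\n")
import helper
print(helper.VALUE)  # still 1

# importlib.reload re-executes the module's source in place.
importlib.reload(helper)
print(helper.VALUE)  # 2
```

In Jupyter, `%load_ext autoreload` followed by `%autoreload 2` does this automatically before each cell execution.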
That in theory shouldn't be too bad. Though, there have been some things like %sql I remember using in one notebook, which was essentially a macro for making a datatable from a SQL expression, but it wasn't something you could just copy directly.
I do think notebooks are very flawed right now even if the concept is sound. Right now they are essentially an IDE without the ability to produce publishable output (you are literally expected to ship the project in the form the IDE works with) and they are not a very good IDE at that.
They need reliable dedicated published output fit for general public consumption. This means a static (no backend host required) sharable .html file where end users can view all the data and run the code samples that doesn't try to present as an IDE. I actually wrote https://rubberduckmaths.com/eulers_theorem in Jupyter but had to manually copy and paste to a new well formatted static html file and re-paste the code blocks into Pyodide enabled text areas within that html since the export functionality is a mess. The result of the manual work means i now have an easily sharable and easily hosted static html file with working Python code samples as it should be but... Why don't Notebooks have a published form like this already? It seems pretty obvious that Notebooks are your IDE, they shouldn't be the output you present. We're literally asking users today 'to view this notebook install Jupyter/Marimo/whatever and open from there' when the Notebook is designed to create the publication rather than a place to view it. In general the output i demonstrate above should be the minimum bar that Notebook 'export' features should hit. Export as a static .html file with working code. As someone who manually 'compiles' notebooks it's not hard to do yet Notebooks simply don't have an actual working html export right now (i know there's technically a 'html' export option in Jupyter but it will strip out your code and create a terribly poorly formatted document as output).
The IDE aspects themselves, at least for Jupyter (the one I've tried out the most), are a bit too simple too. Yes, it's nice to have alternating 'text' blocks followed by 'code' blocks, but that's really all they are right now. I want something more complex. I want the code blocks shown to actually be windows into a full Python project. Users should be able to change the code shown and view the larger, well-structured Python code. Right now it's text, followed by simple code. Not much more, honestly. As it is right now, I feel notebooks only work for really simple projects.
If you have a complex Python project, I agree the only way to share it is to share the Python project as-is. Notebooks could be a wonderful explanatory wrapper around a larger project, but right now they just aren't good at doing much more than the simple "here's some data" followed by "here's the code I used to process that data", and they don't even present that particularly well.
*anywhere that has the python runtime installed
ironically that makes jupyter more portable via colab
notebooks are great for sharing code with people who might have only a cursory knowledge of coding. They also help highlight specific sections of code. Graphs and other outputs immediately following the block of code that generates them can be really helpful as well
If a "script" gets too long and complicated, and it's a project I intend to present to others, I often reach for a notebook to organize the code in a more digestible format
Can someone clarify how deepnote has the authority to declare the "successor" of Jupyter?
What’s these folks relationship to Jupyter? I guess they must be some of the really prominent Jupyter developers? Otherwise declaring their system the successor to such a widely used tool seems pretty presumptuous.
Just people working at a moderately successful startup, frustrated that they've created a niche product.
Very few objections to Jupyter that are not addressed by `nbconvert`.
Jupyter does not have (or need) a successor.
Indeed regular Jupyter works so well on VS Code for solo work these days that there is no real need for a new entrant.
So what pain point are these new entrants trying to solve?
Sure there is an issue of .ipynb basically being a gnarly json ill suited for git but it is rare that I need to track down a particular git commit. Even then that json is not that hard to read.
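To make the "gnarly JSON" point concrete, here is roughly what a single-cell notebook looks like on disk. This is a hand-written minimal sketch of the nbformat 4 structure, not a full spec-compliant file: note how the source is a list of strings, and how execution counts and outputs get committed alongside the code.

```python
import json

# Minimal single-cell notebook: even a one-line edit touches nested
# JSON, and re-running bumps execution_count -- hence noisy git diffs.
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {
            "cell_type": "code",
            "execution_count": 1,
            "metadata": {},
            "source": ["print('hello')\n"],
            "outputs": [
                {"output_type": "stream", "name": "stdout", "text": ["hello\n"]}
            ],
        }
    ],
}
print(json.dumps(nb, indent=1))
```

Readable, yes, but every run mutates `execution_count` and `outputs`, which is what makes diffs noisy in practice.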
Also, I'd like an easier way to copy cells across different Jupyter notebooks, but at the end of the day it is just Python and markdown, not very hard to grok.
The framing of this title makes it seem like Jupyter is dead. It, in fact, is not.
Oh, and it's Apache 2 licensed, so actually open source and not just pretending for the cred.
Sincerely, nice.
Might wanna spell-check the post if you want any credibility on your claim:
> worfklows
Observable is already open-sourced and well-respected. Bold and ridiculous to claim your random product is "the successor" of a well-known project, without any obvious relationship to the founders/maintainers of the thing you claim to be aping.
What's wrong with `workflows` - out of curiosity?
Also, AFAIK, Observable is only JS - this is a Python notebook solution that we are talking here.
I'm just an observer - their claim of being successor to Jupyter is definitely hyperbole.
The letters are transposed: "worfklows" instead of "workflows". I don't mind it, but they should fix it anyway.
A lot of the comments on here are too pessimistic. Deepnote has had the single best jupyter interface for years now - unfortunately locked behind a cloud subscription though. Jupyter itself has been stagnant for far too long, and it's much appreciated there's more options coming online that have a modern level of polish.
Marimo is great, but it's good to have competition in the space (especially when both projects are still owned and maintained by VC backed companies).
Which one is better, marimo or deepnote?
It's telling that Wolfram / Mathematica doesn't even come up in a blog post like this, as the inventors of "the notebook". Jupyter took the concept to a whole new level, but the concept did originate in Mathematica 30 years ago!
The concept of literate programming, text interspersed with code, is older. Knuth mostly invented it, writing TeX, among other things, in it. Org mode even let you evaluate code blocks and store the output, or use it in future blocks.
I love how, in today's world, open-sourcing something is basically a precursor to "we're sunsetting it."
Lots of notebooks floating around.
I love notebooks, I use them all the time for teaching, writing (all of my books are written in notebooks), EDA, model development, and more. I've spoken at Jupytercon.
Having said that, I've never played around with other notebook implementations (ok, I've used IPython Notebook, Jupyter Notebook and Lab, Google Colab, ein (emacs), Jupyter in Vscode, and Notebook (.py) files in Vscode).
I've seen Joel's rant about notebooks, and they do have drawbacks.
But I would rather push better programming practices (chaining pandas, using functions, rearranging cells) than have dependent cells written in the horrible piecemeal style that I see all around the industry.
My biggest issue with notebooks is JSON. I've used Jupytext to get around it for years, and now many LLMs are decent at writing Jupyter JSON.
You claim it’s a successor, but is everyone really on board with that? I love jupyter, but generally feel like having to run a server is the downside.
The nice thing about jupyter notebooks is that you can run them inside vscode without an explicit server, but I like to just use %% so that I can run it in zed and vs code and it’s just a python file that doesn’t need conversion.
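For reference, the `%%` style mentioned above looks like this: a plain `.py` file whose `# %%` markers cell-aware editors (VS Code, Spyder, Jupytext-style tools) treat as runnable cells, while the file stays an ordinary script:

```python
# %% [markdown]
# # Widget analysis
# This line renders as markdown in cell-aware editors, and is just a
# comment when the file runs as a normal script.

# %%
data = [1, 2, 3, 4]
total = sum(data)

# %%
print(f"total = {total}")  # total = 10
```

No JSON, no conversion step, and `git diff` shows exactly the lines you changed.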
Haha, I've asked LLMs to avoid fluff in a prompt and gotten exactly a heading like that one with "(no fluff)" before.
Otherwise I've followed DeepNote since they started. I agree with other comments that it's icky to announce yourself as a successor to someone else's project, but always nice to have more options for open source
I'm not involved in any capacity with the development or use of Jupyter—I think ipynb is fundamentally flawed at a deep level, starting with its (I)Python roots—but this company's framing of their product as "the successor to Jupyter notebook" comes across as passive aggressive at best and misleading at worst. What is their relationship to Jupyter besides building a Jupyter alternative?
What are some of the flaws surrounding IPython and ipynb in particular?
Hasn't LiveBook had multiplayer editing for a few years now?
I'm pretty confused why a company would waste such an important announcement / milestone with a clearly llm-generated blog post.
If the job postings for Jupyter go down, what exactly will happen to an "AI"-first replacement when "AI" weariness is rising sharply?
"AI" has achieved what seemed impossible in 2019: It makes people hate all tech and gets them away from computers.
This is an ad.
The successor to Jupyter notebook is Marimo, https://marimo.io/ because they are pure code, not code in json. First class everywhere.
How is this a “successor”? It’s not tied to the Jupyter project in any way? Looks like a scummy ad for some subpar aislop product?
Jupyter use is declining because coding agents got really good. Multiplayer mode is not going to save it.
as me and my co-worker used to joke, "marimo is the mclaren of notebooks".
standing up Bingo!
(Anybody still familiar with "bullshit bingo"?)
The article reads as if generated by AI. Lol
Instead of contributing to Jupyter, we will create another tool
I'm Jakub, CEO of Deepnote.
Didn't expect to see this trending here! We worked hard to execute on our vision of a data notebook and I'm glad we finally got a chance to open source it. We stand on the shoulders of giants. AMA!
Which LLM was used to generate that post?
And do you have a marketing team to fire, or is it just the LLM?