>This is a first. I never lost access to any of my past sessions because I unsubscribed in any of the LLM apps.
It's not entirely unprecedented - I've seen these tactics in the Google ecosystem. Google Music: unsubscribing killed (kills?) access to seeing your playlists, which of course you only learn once it's done. Give them a credit card again and you can see and export them again. Magic!
Resubscribed for 1 month, exported everything, unsubscribed, and swore never to trust Google Music again. idk why they implement patterns like that - sure, they extorted $10 in cash out of me, but it makes the brand toxic.
It is still there and you may get it.
If you export your data [0], all your Claude Design chats are in a design_chats directory along with the code, even if your account currently has no access to Claude Design. It's JSON, but converting that into usable code is easily done, either manually or by asking any fairly modern LLM via OpenCode. Just did it myself; it works.
Not feeling like commenting on every statement regarding SaaS and expectations, but I will say that some are mistaken or not considering the whole picture. The data must still be there - otherwise any temporary subscription cancellation would mean permanent data loss, and with payment providers, credit cards, etc. involved, cancellations happen quite frequently by accident, without the user intending to cancel. And if the data is there, users in consumer-friendly jurisdictions have the right to export and access it. Doesn't matter whether they pay or not. Of course, manual backups are always preferable - a provider could still suffer a data loss, after all - but as long as they have it, at least in my neck of the woods they have to give it to you. As it should be.
[0] https://claude.ai/settings/data-privacy-controls
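If you'd rather not feed the export to another LLM, pulling the code back out can be scripted. A minimal sketch, assuming the export is one JSON file per chat with fields like `chat_messages` and `text` - those field names are my guess at the schema, not confirmed, so adjust to whatever the actual export contains:

```python
import json
import re
from pathlib import Path

# Matches fenced code blocks (``` ... ```) inside a message body.
FENCE_RE = re.compile(r"```(?:\w+)?\n(.*?)```", re.DOTALL)

def extract_code(chat_json: str) -> list[str]:
    """Pull every fenced code block out of one exported chat file.
    "chat_messages" and "text" are assumed field names."""
    chat = json.loads(chat_json)
    blocks: list[str] = []
    for msg in chat.get("chat_messages", []):
        blocks.extend(FENCE_RE.findall(msg.get("text", "")))
    return blocks

def dump_chats(export_dir: str, out_dir: str) -> None:
    """Walk a design_chats directory and write each code block to disk."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, f in enumerate(sorted(Path(export_dir).glob("*.json"))):
        for j, block in enumerate(extract_code(f.read_text())):
            (out / f"chat{i}_block{j}.html").write_text(block)
```

If the export stores the generated code in a dedicated field rather than inline fenced blocks, the regex step drops out and it's just a JSON walk.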
A lot of these things are made fast and loose, and unfortunately this is the reality of using the bleeding edge. Even Figma went through this kind of thing very early on.
To add something else to the discussion, however, I'd encourage people to skip Claude Design for other reasons: the inherent limitations of LLMs for visual design. LLMs are blind, and spatial reasoning is tremendously hard across layers of nested HTML/CSS.
If you're early on, I'd recommend starting with diffusion first. GPT-Image-2 is phenomenal at UI design, and especially if you're just starting out it will let you align on a direction more rapidly than an LLM can. The difficulty will be converting from image to HTML, but you'll be able to explore different directions more cheaply and quickly than you could with Claude Design.
I will note a bias disclaimer here - I quit Figma to work on my own diffusion-based UI design tool. Not promoting that here, but wanted to at least share my findings in this space.
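The image-to-HTML step can itself be semi-automated against any multimodal chat API. A sketch of building the request payload in the common OpenAI-style message shape - the model name is a placeholder and the prompt wording is mine, not any vendor's documented contract:

```python
import base64

def image_to_html_payload(image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build an OpenAI-style chat payload asking a multimodal model to
    reproduce a UI mockup as a single self-contained HTML file."""
    data_url = f"data:{mime};base64,{base64.b64encode(image_bytes).decode()}"
    return {
        "model": "YOUR-MULTIMODAL-MODEL",  # placeholder, not a real model id
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Reproduce this UI mockup as one self-contained "
                         "HTML file with inline CSS. Output only the code."},
                {"type": "image_url",
                 "image_url": {"url": data_url}},
            ],
        }],
    }
```

Fidelity of the result still varies a lot with layout complexity, which is exactly the conversion difficulty mentioned above.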
Thank you so much for your suggestion regarding UI design. As this is not my main expertise, I need some tool to depend on to ground my projects somehow. Even though Stitch by Google and Claude Design are not perfect, they give me a starting point. Then, after building the actual working project, I iterate until I like the look of it. That's how I'm using these right now. I can't even iterate within these design LLMs - their own UX is very clunky and not very friendly, or it's made more for the design folks.
But I will give GPT-Image-2 a try. Actually, a few months back I remember doing this kind of UX/UI research in the ChatGPT app itself, just asking it to generate what a certain app might look like, etc.
Please let me know your UI design tool. I want to try it out.
> A lot of these things are made fast and loose
Yeah, I'm starting to be worried about Anthropic's security controls for customer information.
To say they'd have a firehose of sensitive info from customers would be a massive understatement. Hackers gaining access to that, especially for a non-trivial duration, would be a disaster.
This has not been my experience. Claude Artifacts at first, then Claude Design after it was released, have been excellent at design! Being able to steer the model to update the design with different ideas and visions, even adopting different design systems like Material 3 or Apple’s HIG, has been phenomenal.
What do you mean LLMs are blind? All frontier models are multimodal, which means they literally consume images as tokens. They can “see” exactly as well as they can “read”.
Also, GPT-Image-2 is not a diffusion model, it is based on Transformers, like other LLMs are.
Tokens are not a substitute for a numerical measurement.
Ask an LLM how much time has passed. Watch it hallucinate wildly.
Has anyone noticed that Opus has trouble building ASCII diagrams? It often leaves out spaces, so lines end up misaligned.
Or just use Google's Stitch, it integrates both code via Gemini and image UI generation via Nano Banana which I'd argue is even better than OpenAI's image models.
Multimodal LLMs are not blind.
Claude Design in my experience is very, very solid.
I’ve only used it for fairly basic stuff, things that are very well represented in the training data. But for that it has made me happy.
I have been using Claude Design + Claude Code, and results have been excellent. I have explicit clean-up instructions in Claude Code, and the handoff skill in Claude Design is pretty solid.
I've been on product launches many times, so can drive the design side appropriately and keep things focused. Has been a wonderful addition to my workflow.
As usual with any agent-driven tool - GIGO. If the human driving has no product experience and is blindly accepting designs, well, that's... a choice.
tbh, backups matter. but nobody would accept Word deleting your files when you cancel Office. somewhere along the way we stopped distinguishing backup from custody.
I also encountered an issue with my credits. I was previously subscribed to the Max plan, claimed credits, then downgraded to the Pro plan and noticed I lost my credits. I didn't unsubscribe, just downgraded plans, as I wasn't using Claude enough to justify needing Max.
It’s pretty outrageous to lock out all your history just for canceling the subscription.
Cannot help but think the Claude team is busy adding gimmicky side features instead of doing 'real' RSI and bug fixing.
I don't think the product people and the RSI-adjacent people are the same people.
The lovely irony of a bleeding edge AI company being afflicted by the most mundane problem of all software engineering—the goldfish attention span of human coders.
Backup data that’s important to you.
When you lose access to your projects, does Anthropic acquire the intellectual property? It's a real issue when it's in a machine learning system, not passive storage like GitHub.
Not your server not your data
And AI hypers suggest to build your whole career/identity on this shit. Already foresee "skill issue", "well you should've x, y, z, obviously", etc.
People often build their career skills on proprietary tech. Photoshop, Figma, Java, AWS architects
Just because people do it does not mean that it is a good idea.
Sorry but that one is on you. This sounds like expected behavior and I wouldn't blame any company for doing that.
Aside from OP's post, there's another issue with Claude Design worth mentioning. Yes, it makes absolutely beautiful designs, stunningly so, but the actual code is not something a human could ever maintain. So it's like ending up with an opaque blob. Write-once, read-never, almost disposable code. This is bad, because code people aren't going to bother to read might contain vulnerabilities.
It's an extreme example of slop code: while LLMs normally produce code that ranges from somewhat okay to utter garbage, the web code Claude makes is awful. On the other hand, you get a single file (even if it's full of 20+ embedded SVGs, scripts, and other such things).
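One way to make such a single-file blob slightly less write-once is to split the embedded assets out mechanically. A rough sketch that pulls inline SVGs into separate files and replaces them with `<img>` tags - the file naming is arbitrary, and the naive regex won't survive nested `<svg>` elements:

```python
import re

# Naive: matches the first closing </svg>, so nested SVGs will break it.
SVG_RE = re.compile(r"<svg\b.*?</svg>", re.DOTALL | re.IGNORECASE)

def split_svgs(html: str, asset_dir: str = "assets") -> tuple[str, list[str]]:
    """Replace each inline <svg> with an <img> tag pointing at an
    extracted file; returns the rewritten HTML plus the SVG sources
    (which the caller can write out to asset_dir)."""
    svgs: list[str] = []

    def repl(m: re.Match) -> str:
        svgs.append(m.group(0))
        return f'<img src="{asset_dir}/icon{len(svgs) - 1}.svg" alt="">'

    return SVG_RE.sub(repl, html), svgs
```

It doesn't make the generated markup itself readable, but at least the file stops being 80% icon data, and the remaining structure becomes reviewable.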
Have you actually gotten it to build stunning designs? From what I’ve seen it still falls apart very quickly. They can do a decent job at building blocks but usually not putting them together in a cohesive way in my experience.
So.. you unsubscribed from a SaaS and expected them not to purge your data? Why would that make sense?
Anthropic may be a bunch of skids but it sounds like they did the right thing here. Pretty much all SaaS applications, especially in B2B, are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.
The standard across almost all services is to retain easy-to-retain data when someone leaves. It's just good business: you WANT them to come back.
The only example I can think of are the TV services: Netflix will erase your watched show list if you unsubscribe. But they are very purposefully doing it out of spite: they want to push you towards not unsubscribing at all (so they penalize it even at the cost of discouraging you from coming back ... because they know "subscription hopping" is a thing, and expect you'll come back anyway).
It's 100% a dick move when the TV services do it, but at least it (kind of) makes business sense for them to do it. For Claude it's just alienating their customers needlessly.
You get two years of 'free' (read-only) storage if you unsubscribe from Google; it's very unusual to just nuke all access immediately.
> are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.
that's a very bullshit justification - we're not talking about the 'delete account' button, especially since Claude has a free tier.
I guess it's not a termination, but a downgrade to the "free" tier. But that still makes sense, given Claude Design is limited "to Pro, Max, Team, and Enterprise plans". He's not on that plan anymore so.. what commercial reason could they possibly have to keep his data?
Google Workspace seems to halt access immediately[1] and purge data within 60 days[2]. For comparison, Atlassian leaves you access for 15 days and purges data at 60 days[3]. Microsoft 365 gives you 90 days[4] before purging.
This is a pretty regular thing across the industry.
[1] https://knowledge.workspace.google.com/admin/billing/cancel-...
[2] https://support.google.com/a/thread/345697828/recovering-dat...
[3] https://support.atlassian.com/security-and-access-policies/d...
[4] https://learn.microsoft.com/en-us/compliance/assurance/assur...