I continue to maintain that the best metaphor for the current situation in software development is "The Sorcerers Apprentice" in Fantasia:
https://www.youtube.com/watch?v=m-W8vUXRfxU
Lovable is marketed to non-developers, so their core users wouldn't recognize a security flaw if it flashed red in front of them. A lot of my non-dev friends were posting the cool new apps they built on LinkedIn last year [0]. Several were made on Lovable. It's not on those users to understand these flaws.
The apps all look the same apart from the color palette, and each one makes for an engaging AI post on LinkedIn. Now they're mostly abandoned, waiting for the subscription to expire... and for the personal data to get exposed, I guess.
[0]: https://idiallo.com/blog/my-non-programmer-friends-built-app...
Developers with decades of experience still introduce basic security holes. The general public is screwed once people start hosting their own apps and serving them on the Internet.
There's something so innocent about the early days when even Microsoft thought we'd be running Personal Web Servers and hosting our own websites in a peer-to-peer fashion.
Although cynically, in 1996 Microsoft would probably tell you anything you wanted to hear if it got you using Internet Explorer.
> The Personal Web Server is ideal for intranets, homes, schools, small business workgroups and anyone who wants to set up a personal Web server.
https://news.microsoft.com/source/1996/10/24/microsoft-annou...
The hardest part about this stuff is that as a user, you don't necessarily know if an app is vibe-coded or not. Previously, you were able to have _some_ reasonable expectation of security in that trained engineers were the ones building these things out, but that's no longer the case.
There's a lot of cool stuff being built, but also as a user, it's a scary time to be trying new things.
Yeah, my trust for new open source projects is in the toilet. Hopefully we will eventually start taking security seriously again after the vibe code gold rush.
> Hopefully we will eventually start taking security seriously again after the vibe code gold rush.
Companies don't take security seriously now, and that predates vibe coding.
I'm sorry, what?
> Previously, you were able to have _some_ reasonable expectation of security in that trained engineers were the ones building these things
When was this? What world? Did I skip worldlines? Is this a new Universe?
The world I remember is that anybody could write a program and put it on the Internet. Is this not the world you remember?
Further, when those engineers were "trained" ... were there no data breaches before 2022?
Of course there were. Don't be pedantic. Anybody could write a program and put it on the Internet. But to get a reasonably polished version with decent features and a UX enjoyable enough for someone to sign up and even pay money, it generally took people who kind of knew what they were doing.
Of course shortcuts were taken. They always were and always will be. But don't try to compare shipping software today to even just 3 years ago.
Yes - AI has completely destroyed the set of "signals" people used to judge the quality of much software. They were never 100% accurate, sure, but they were often pretty good heuristics for "level of care": what the devs considered important (or didn't consider important) and the like.
And I mean that as both "end user" software signals, and "library" signals for other devs.
I assume that set of signals will slowly be updated. Whether one of them ends up being "any use of AI at all" is still an open question, depending as much as anything on whether the promised hype actually ends up matching capability.
Vibe coding democratized shipping without democratizing the accountability. The 18,000 users absorbed the downside of a risk they didn't know they were taking.
I don't think you know what democracy means. Democracy means that users can reject poorly made apps; if you can't reject or remove something, it's not a democratic process.
Having someone dump shitty wares onto the public is only democracy if you think being unaccountable is democratic.
One of the meanings of the word "democratization" is "the action of making something accessible to everyone", which is clearly the sense meant here.
With the power of LLMs anyone can make and sell foot guns.
I've been thinking a bit about how to do security well with my generated code. I've been using tools that check dependencies for CVEs, static analyzers that check for SQL injection and similar problems, and baking some security requirements into the specs I hand Claude. I can't tell yet if this is better than what I did before or just theater. It seems like in this case you'd need/want to specify some tests around access control.
I'm interested to hear how other people approach this.
Ask the LLM to create a POC for the vulnerability you have in mind. Last time I did this I had to repeatedly promise the LLM that it was for educational purposes, as it assumed the information was "dangerous".
Same way you handle preserving any other property you want to preserve while "vibecoding" -- ensure tests capture it, ensure the tests can't be skipped. It really is this simple.
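To make that concrete, here's a minimal sketch (the policy function `can_view_record` is hypothetical, standing in for whatever access rule the app enforces): pin the property with plain assertions, so a regeneration that flips the logic fails CI instead of shipping.

```python
# Hypothetical sketch: pinning an access-control property with tests.
# `can_view_record` stands in for whatever policy function the app uses.

def can_view_record(user_id, owner_id, is_authenticated):
    # Intended policy: only the authenticated owner may view the record.
    return is_authenticated and user_id == owner_id

# Tests that capture the property. If a later regeneration inverts
# the logic, these fail loudly instead of silently shipping.
def test_anonymous_is_rejected():
    assert not can_view_record("u1", "u1", is_authenticated=False)

def test_owner_is_allowed():
    assert can_view_record("u1", "u1", is_authenticated=True)

def test_non_owner_is_rejected():
    assert not can_view_record("u2", "u1", is_authenticated=True)
```

The "can't be skipped" part matters as much as the tests themselves: run them in CI, and don't let the agent touch the test files.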
> One example of this was a malformed authentication function. The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users.
Actually sounds like a typical mistake a human developer would make. Forget a `!` or get confused for a second about whether you want true or false returned, and the logic flips.
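For illustration (hypothetical names, not the actual Lovable/Supabase code), the flip really can be a single token:

```python
# Hypothetical illustration of the inverted check described in the quote.

def is_authorized(session):
    # Intended: allow only sessions with a logged-in user.
    # One stray `not` inverts the policy: authenticated users are
    # blocked, anonymous users are let through.
    return not session.get("user")  # BUG

def is_authorized_fixed(session):
    return session.get("user") is not None
```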
The difference is a human is more likely to actually test the output of the change.