Just commented this elsewhere, but my take on cybersecurity today: it's about to blow up in demand, with so many skiddies now able to hack anybody with an LLM. We're seeing websites, systems, and companies compromised at an alarming rate. I suspect one of these days we'll see a headline about a compromise that will shock and horrify us all. Anyone sleeping on cybersecurity is a ticking time bomb.
Honestly, if you wanted to start a YC company today that targets AI in a meaningful way, I'd say make it focused on cybersecurity analysis. ;)
> I suspect one of these days we will see a headline of a compromise that will shock and horrify us all
But we've had the shock headlines already, and nothing changes. We've seen hospitals get hit with real-life consequences for patients, and the SSNs of essentially every US citizen have been breached multiple times now. Passwords as a concept are basically obsolete. And there's more.
That bomb has already been going off.
If anything, I'm seeing the opposite. Companies are throwing security to the wind to go all in on AgEnTiC AI.
If we want change with regard to cybersecurity, there need to start being real consequences for a breach. Not just free credit monitoring. Companies proven to be negligent should face actual financial and criminal consequences.
I'm building in the cybersec space. I don't think you even need script kiddies now. Internal employees run dangerously bad ops with AI, which is itself a cybersec nightmare.
Whenever I tell people I work in computer security, their first question is "are you worried about AI taking your job?" To which I just laugh and respond, "AI is job security."
It really is! If anything, AI will only help you: you aren't worried about AI giving you bad code, just bad answers, which you'd validate anyway. The other area where AI could be interesting, and I don't hear much buzz about it, is outages: if it can query all the online systems and logs in your cloud, it could probably triage faster than an entire outage team could. Surprised nobody's built such a system yet. ;)
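To make the outage idea concrete, here's a minimal, hypothetical sketch of the orchestration layer such a tool would need before any model gets involved: pull recent logs per service, rank services by how error-heavy their logs look, and package the worst offenders into a prompt for an LLM to summarize. All names here (`rank_suspects`, `build_triage_prompt`, the log format) are invented for illustration, not any real product's API.

```python
def error_rate(log_lines):
    """Fraction of lines that look like errors; a crude stand-in
    for whatever anomaly signal a real triage tool would compute."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if "ERROR" in line or "FATAL" in line)
    return errors / len(log_lines)

def rank_suspects(logs_by_service):
    """Order services worst-first so the model sees likely culprits early."""
    return sorted(logs_by_service,
                  key=lambda svc: error_rate(logs_by_service[svc]),
                  reverse=True)

def build_triage_prompt(logs_by_service, top_n=3):
    """Assemble the text an LLM would be asked to triage:
    the tail of the logs for the top-N most suspicious services."""
    sections = []
    for svc in rank_suspects(logs_by_service)[:top_n]:
        tail = "\n".join(logs_by_service[svc][-20:])  # last 20 lines only
        sections.append(f"## {svc}\n{tail}")
    return "Likely culprits, worst first:\n\n" + "\n\n".join(sections)
```

The interesting engineering is in the ranking and context selection, since you can't ship every log line in your cloud to a model; the LLM call itself is the easy part.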
I mean it in the sense that AI security hype and the larger geopolitical environment have woken a lot of people up to the reality that they need to consider security. And the ones who haven't woken up yet will get a wakeup call when they're breached. It also increases the demand for real security expertise, which is already scarce.
Also, in my niche (hardware and embedded product security), AI doesn't have a functional impact on the work except in code analysis, and even that is difficult given the level of abstraction these systems are built at.
That's fair, though even that could just be a matter of time, as people build tools that interface LLMs to the physical world. I wonder how something like Bus Pirate could be used with an LLM (maybe a more powerful version of it?) to grok and poke hardware all over the place.
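One speculative shape this could take is exposing hardware operations as LLM "tools" the model can invoke. Everything in this sketch is hypothetical: the tool names, the fake `i2c_scan` results, and the dispatch format are invented for illustration, and a real Bus Pirate is driven over a serial console with its own command syntax, not Python functions.

```python
def i2c_scan():
    """Stand-in for scanning an I2C bus; a real tool would talk to
    hardware over serial. The addresses below are fabricated."""
    return [0x50, 0x68]  # pretend two devices answered

def spi_transfer(tx_bytes):
    """Stand-in for a full-duplex SPI transfer; here we just echo
    the bytes back instead of clocking them through real hardware."""
    return list(tx_bytes)

# Registry a tool-calling layer would hand to the LLM, mapping tool
# names to (function, expected-argument-names) pairs.
TOOLS = {
    "i2c_scan": (i2c_scan, []),
    "spi_transfer": (spi_transfer, ["tx_bytes"]),
}

def dispatch(call):
    """Execute a model-issued tool call of the form
    {"name": ..., "args": {...}} against the registry."""
    fn, params = TOOLS[call["name"]]
    return fn(**{p: call["args"][p] for p in params})
```

The hard part wouldn't be the plumbing; it would be giving the model enough grounding about the target board that its pokes are informative rather than destructive.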
Do you think that AI helps security offense more than defense? It's not obvious to me that it does.
Companies don't fundamentally care about cybersecurity. Most of them see cybersecurity as being similar to waste management; it's not something you get excited about. Sure, your company _must_ have a waste management plan, but it only exists out of pure necessity. It's required to do the real work of the company, but if you had a magic wand and never had to deal with it, you'd choose that option. And, like waste management, plenty of companies outsource their cybersecurity, since it's cheaper and they don't really care about it.
The industry culture around security work and career paths seems just f'd up.
Instead of ensuring we build systems on robust foundations, people end up in a swamp of frustrating roles: SOC staff chasing false-positive alarms all day, peddling ineffective add-on security products, management CISO roles where you're expected to take responsibility for existing insecure (Microsoft etc.) infrastructure without the power to change things, or working on demotivating compliance bureaucracy that doesn't actually improve security.
I'd argue work on meaningful security improvements is mostly available outside industry security roles.
With Claude writing so much of the software in big companies, Anthropic is well-positioned to eat up SAST, DAST and a lot of the supply chain analysis. EDR and proactive security are still going to be massive businesses, however.
"Show me the incentives, and I'll show you the outcomes." - Charlie Munger
Right now, if you have a security breach, at least in the US, you send out a letter telling people that their data could be God-knows-where and offer them two free years of credit monitoring. Victims aren't really going to use that, because it's essentially useless. If they've got absolutely, positively nothing better to do with their time, I guess they could file a lawsuit. Who knows what the outcome would be. Probably not in their favor.
In other words, it's cheaper for them to overwork the InfoSec guys/gals and barely care about what is happening outside of day-to-day operations, than it is to really secure their stuff. So they don't spend that money.
If you saw corporate valuation-cratering fines being implemented - the kind that would end the c-suite's careers and bring shame to their family lines for seven generations - I bet that they'd start catering lunches for the InfoSec team.
> "Show me the incentives, and I'll show you the outcomes." - Charlie Munger
Also note that, as with pharmaceutical companies, treatment is more profitable than cure for infosec consultants.
New idea: AI tool to help generate legal letters to companies after they leak data to cause them maximum inconvenience.
The human speed legal system would become collateral damage.
You could also create an AI tool to help generate letters to lawmakers about how they need to make a real dent in this between reruns of Matlock in the retirement home.
> offer them two free years of credit monitoring. Victims aren't going to really use that because it's essentially useless
It's generally actively harmful, and the CRAs fight for this business from breaches because, universally, to accept the free credit monitoring you have to sign up for their highest-tier credit monitoring package (which can run up to $50/month), supply a credit card, and then hope to remember, a year later, to cancel at the end of the free period. Otherwise, they'll convert you to a paying customer.
I don't think fines are enough of an incentive. They're too easy to evade and insufficiently consequential for the people actually shipping code. Moreover, making them enormous (as you aptly put it, "valuation-cratering") unfairly punishes people who aren't directly responsible for the failure. Instead, as in other engineering disciplines, engineers need to be personally liable for the consequences of failure. Not necessarily every engineer--not every mechanical engineer needs to be a P.E.--but someone directly responsible for the quality of the work needs to stake their reputation on it and suffer the consequences when it fails.
In practice this would mean that you need to show conformance to some kind of security process. The actual outcome of that process is of secondary importance as long as you can show that you’re compliant. Very carefully written process documents _can_ improve things, but my confidence in security processes is low for companies without intrinsic motivation.
I think one can reasonably argue that sufficiently large fines that don't have a "but we followed ISO-xyz" loophole could produce better outcomes. The difficult part is making companies care about existential tail risks.
Companies already follow a bunch of standards like SOX, SOC 2, HIPAA, etc., and document their adherence by checking all the boxes, but incidents still happen every week.
Yes, it'll generate a lot of super annoying paperwork. But, hopefully, it will also tighten up software engineering standards. It has worked well in other disciplines.
There already are areas where such standards exist, e.g., safety-critical applications in aviation. Arguably the defect rate there _is_ lower, but I still think this method of achieving it is quite inefficient. And I think it's a lot easier to define a process for writing aviation software that doesn't crash than for writing software that's difficult to hack.
The missing piece is the requirement for a certified Professional Engineer to sign off on the system. That decouples the incentives from the corporate objectives, and makes it personal. We need that kind of professional accountability in software, otherwise it'll continue to be bad.
It is my understanding that personal responsibility already exists in safety critical software development.
Yep. I had a chance to go for a cybersecurity degree, and every time I've looked at it, the career path is basically an applied insurance job.
Cybersecurity doesn't make money. It doesn't raise a company's profits. Instead, it's a compliance, contractual, and legal defence to repel lawsuits and keep data boundaries clean.
And who's the first to go? Groups that don't make money. Like cybersec.
Cybersecurity certainly makes money. The good ones make a lot. I mean a lot.
But if you think you can just study for a year and get some security certificates and call it a day, you're going to be sorely disappointed in the compensation.
OP means that a company's internal security ops don't generate add-on revenue. Cybersec definitely adds revenue for services and provider companies.