Very interesting! This feels like one of the most effective possible tools in the "fight" against AI - insurance companies have a lot of sway, and it says a lot that several large companies are looking into carving out exclusions for AI usage.
Agreed. This is the firewall rule against the 'wild west' climate of AI that I would have expected to kick in much earlier, and I wonder whether any presidential edict can brute-force past this obstacle.
This is a bigger deal than it seems. Risk bureaucracies are hugely influential in how companies, debt, and equity are valued, and they tend to grow over time. This is probably a huge growth opportunity for insurance and a rock-solid growth ceiling for AI use in certain industries. It is also not something that will go away: there is pretty much no political maneuvering, marketing, or industry growth that will suppress it. It will lead to forced AI disclosures and insurance-defined best practices that will likely not allow "hands-off" AI output without user sign-off. Paradoxically, this might create a moat for the bigger AI firms that can keep up with the requirements.
Seems like this could also have a big effect on the massive data center buildout.
> [The] exclusion WR Berkley proposed would bar claims involving “any actual or alleged use” of AI, including any product or service sold by a company “incorporating” the technology.
Read liberally, this would make every AI data center uninsurable. If it reaches the point where you cannot insure the facility, the hardware inside it, or anything related to them, it becomes a really serious financial issue from a company-risk perspective.
Much like housing, there are a lot of areas where you simply cannot build without proper insurance that will cover likely claims.
You seem more familiar with the space than I am, so I figured I'd ask: do you think this would be addressed by the proposed compliance framework AIUC-1? The article mentioned the startup behind it (Artificial Intelligence Underwriting Company). I don't know enough about AI usage or implementation to evaluate whether the proposed framework would actually make AI usage more dependable, but I could see insurance companies requiring AIUC-1 (or similar) compliance for coverage.
Non-paywall link: https://archive.md/SpAV5
I’m still waiting for the day when it’s easy for the consumer to sue a company in the event of a data breach.
It’s pretty easy to do now if you want; you’re just going to have a hard time demonstrating harm the 38th time your information has been exposed.
Paywall