I don't know if it's "evidence of AI" so much as "evidence of laziness causing extreme public embarrassment."
Every good AI policy is basically:
1. You may use <supported LLM with enterprise data agreement>
2. You are still responsible for the quality of your output. Customer-facing embarrassment is your fault and will not be attributed to the technology.
In this case, the LLM was used to generate a reference table.
> "It seems that these references were generated and attached to the document after the fact, as they are not cited in the body of the text."
It's just a retrospective justification for the content they had already written. That's not lazy editing; it implies a complete lack of research, while fraudulently implying the research was completed.
These suspensions send the appropriate message. This isn't the same thing as poorly reviewed marketing copy; hallucinations in government policy papers are unacceptable.
These people are employed to serve the public and are paid by public funds. This is a socially critical job which affects people's entire lives, and in South Africa possibly their personal safety. This isn't just another corporation that needs to make the line go up.
The wording of the article suggests that large parts of the documents were false and should have been caught by review, for which these two director-level people were responsible. This seems to be more than editing that was "a bit sloppy".
I suggest that if you were an immigrant whose citizenship application was denied based on an AI hallucination, forcing you to uproot your family and move out of the country against your will, you would not appreciate that and would take a different view.
> Maybe someday when there's been enough such reports people will shrug like they do about security breaches now.
Yes, there's a real danger that this becomes a wholesale shift downward for society: we stop objecting to errors and mediocrity because they've become so normalized.
People who are paying even slight attention understand and anticipate the correlation between AI and slop/hallucination. There's a reason those terms have emerged, and there are no corresponding terms for AI success/quality.
The entire world sells products to encourage you to do your work with AI assistance.
But god forbid there should be any evidence of that in your... work. You'll be suspended or fired.
Holy god, it looks like someone used AI and was a bit sloppy in their editing! YOU'RE FIRED!
Maybe someday when there's been enough such reports people will shrug like they do about security breaches now.
God forbid people actually have to do work and fact-check the hallucination machines!
You're correct: whether you keep your job depends on how well you conceal that you used AI.
I don't think most people care whether you used AI or not, as long as the output is correct. AI or no AI, incorrect and false material makes people tired of you.