Even a hypothetically true artificial general intelligence would still not be a moral agent
That’s a deep rabbit hole that can’t be stated as a known fact. It’s absolutely true right now with LLMs, but at some point the line could be crossed. If, when, how, and by what definition it might be crossed has been a long-running debate that is nowhere near resolution.
It’s entirely possible that AGI/ASI could come about, be both superintelligent and self-conscious, and still have no sense of morality. But how can we, at a human level, even comprehend what’s possible? Therein lies the real danger: we have no idea what we could be heading toward.
I’m familiar with the problems that come with companies that put up traffic light cameras and then fudge the parameters to catch far more than blatant light-running. We had them in our area, and they were later removed for that very reason. We don’t have speed cameras, though, so I’m not aware of similar problems when they’re set to catch only drivers well over the limit (so you don’t trigger a ticket for 66 in a 65).